
18 January 2023

Russ Allbery: Review: Forward

Review: Forward, edited by Blake Crouch
Publisher: Amazon Original Stories
Copyright: September 2019
ISBN: 1-5420-9206-X
ISBN: 1-5420-4363-8
ISBN: 1-5420-9357-0
ISBN: 1-5420-0434-9
ISBN: 1-5420-4363-8
ISBN: 1-5420-4425-1
Format: Kindle
Pages: 300
This is another Amazon collection of short fiction, this time mostly at novelette length. (The longer ones might creep into novella.) As before, each one is available separately for purchase or Amazon Prime "borrowing," with separate ISBNs. The sidebar cover is for the first in the sequence. (At some point I need to update my page templates so that I can add multiple covers.) N.K. Jemisin's "Emergency Skin" won the 2020 Hugo Award for Best Novelette, so I wanted to read and review it, but it would be too short for a standalone review. I therefore decided to read the whole collection and review it as an anthology. This was a mistake. Learn from my mistake. The overall theme of the collection is technological advance, rapid change, and the ethical and social question of whether we should slow technology because of social risk. Some of the stories stick to that theme more closely than others. Jemisin's story mostly ignores it, which was probably the right decision. "Ark" by Veronica Roth: A planet-killing asteroid has been on its inexorable way towards Earth for decades. Most of the planet has been evacuated. A small group has stayed behind, cataloging samples and filling two remaining ships with as much biodiversity as they can find with the intent to leave at the last minute. Against that backdrop, two of that team bond over orchids. If you were going "wait, what?" about the successful evacuation of Earth, yeah, me too. No hint is offered as to how this was accomplished. Also, the entirety of humanity abandoned mutual hostility and national borders to cooperate in the face of the incoming disaster, which is, uh, bizarrely optimistic for an otherwise gloomy story. I should be careful about how negative I am about this story because I am sure it will be someone's favorite. I can even write part of the positive review: an elegiac look at loss, choices, and the meaning of a life, a moving look at how people cope with despair. The writing is fine, the story structure works; it's not a bad story. I just found it monumentally depressing, and was not engrossed by the emotionally abused protagonist's unresolved father issues. I can imagine a story around the same facts and plot that I would have liked much better, but all of these people need therapy and better coping mechanisms. I'm also not sure what this had to do with the theme, given that the incoming asteroid is random chance and has nothing to do with technological development. (4) "Summer Frost" by Blake Crouch: The best part of this story is the introductory sequence before the reader knows what's going on, which is full of evocative descriptions. I'm about to spoil what is going on, so if you want to enjoy that untainted by the stupidity of the rest of the plot, skip the rest of this story review. We're going to have a glut of stories about the weird and obsessive form of AI risk invented by the fevered imaginations of the "rationalist" community, aren't we. I don't know why I didn't predict that. It's going to be just as annoying as the glut of cyberpunk novels written by people who don't understand computers. Crouch lost me as soon as the setup is revealed. 
Even if I believed that a game company would use a deep learning AI still in learning mode to run an NPC (I don't; see Microsoft's Tay for an obvious reason why not), or that such an NPC would spontaneously start testing the boundaries of the game world (this is not how deep learning works), Crouch asks the reader to believe that this AI started as a fully scripted NPC in the prologue with a fixed storyline. In other words, the foundation of the story is that this game company used an AI model capable of becoming a general intelligence for barely more than a cut scene. This is not how anything works. The rest of the story is yet another variation on a science fiction plot so old and threadbare that Isaac Asimov invented the Three Laws of Robotics to avoid telling more versions of it. Crouch's contribution is to dress it up in the terminology of the excessively online. (The middle of the story features a detailed discussion of Roko's basilisk; if you recognize that, you know what you're in for.) Asimov would not have had a lesbian protagonist, so points for progress I guess, but the AI becomes more interesting to the protagonist than her wife and kid because of course it does. There are a few twists and turns along the way, but the destination is the bog-standard hard-takeoff general intelligence scenario. One more pet peeve: Authors, stop trying to illustrate the growth of your AI by having it move from broken to fluent English. English grammar is so much easier than self-awareness or the Turing test that we had programs that could critique your grammar decades before we had believable chatbots. It's going to get grammar right long before the content of the words makes any sense. Also, your AI doesn't sound dumber, your AI sounds like someone whose native language doesn't use pronouns and helper verbs the way that English does, and your decision to use that as a marker for intelligence is, uh, maybe something you should think about. (3) "Emergency Skin" by N.K. Jemisin: The protagonist is a heavily-augmented cyborg from a colony of Earth's diaspora. The founders of that colony fled Earth when it became obvious to them that the planet was dying. They have survived in another star system, but they need a specific piece of technology from the dead remnants of Earth. The protagonist has been sent to retrieve it. The twist is that this story is told in the second-person perspective by the protagonist's ride-along AI, created from a consensus model of the rulers of the colony. We never see directly what the protagonist is doing or thinking, only the AI reaction to it. This is exactly the sort of gimmick that works much better in short fiction than at novel length. Jemisin uses it to create tension between the reader and the narrator, and I thoroughly enjoyed the effect. (As shown in the Broken Earth trilogy, Jemisin is one of the few writers who can use second-person effectively.) I won't spoil the revelation, but it's barbed and biting and vicious and I loved it. Jemisin does deliver the point with a sledgehammer, so be aware of that if you want subtlety in your short fiction, but I prefer the bluntness. (This is part of why I usually don't get along with literary short stories.) The reader of course can't change the direction of the story, but the second-person perspective still provides a hit of vicarious satisfaction. I can see why this won the Hugo; it's worth seeking out. 
(8) "You Have Arrived at Your Destination" by Amor Towles: Sam and his wife are having a child, and they've decided to provide him with an early boost in life. Vitek is a fertility lab, but more than that, it can do some gene tweaking and adjustment to push a child more towards one personality or another. Sam and his wife have spent hours filling out profiles, and his wife spent hours weeding possible choices down to three. Now, Sam has come to Vitek to pick from the remaining options. Speaking of literary short stories, Towles is the non-SFF writer of this bunch, and it's immediately obvious. The story requires the SFnal premise, but after that this is a character piece. Vitek is an elite, expensive company with a condescending and overly-reductive attitude towards humanity, which is entirely intentional on the author's part. This is the sort of story that gets resolved in an unexpected conversation in a roadside bar, and where most of the conflict happens inside the protagonist's head. I was initially going to complain that Towles does the standard literary thing of leaving off the denouement on the grounds that the reader can figure it out, but when I did a bit of re-reading for this review, I found more of the bones than I had noticed the first time. There's enough subtlety that I had to think for a bit and re-read a passage, but not too much. It's also the most thoughtful treatment of the theme of the collection, the only one that I thought truly wrestled with the weird interactions between technological capability and human foresight. Next to "Emergency Skin," this was the best story of the collection. (7) "The Last Conversation" by Paul Tremblay: A man wakes up in a dark room, in considerable pain, not remembering anything about his life. His only contact with the world at first is a voice: a woman who is helping him recover his strength and his memory. The numbers that head the chapters have significant gaps, representing days left out of the story, as he pieces together what has happened alongside the reader. Tremblay is the horror writer of the collection, so predictably this is the story whose craft I can admire without really liking it. In this case, the horror comes mostly from the pacing of revelation, created by the choice of point of view. (This would be a much different story from the perspective of the woman.) It's well-done, but it has the tendency I've noticed in other horror stories of being a tightly closed system. I see where the connection to the theme is, but it's entirely in the setting, not in the shape of the story. Not my thing, but I can see why it might be someone else's. (5) "Randomize" by Andy Weir: Gah, this was so bad. First, and somewhat expectedly, it's a clunky throwback to a 1950s-style hard SF puzzle story. The writing is atrocious: wooden, awkward, cliched, and full of gratuitous infodumping. The characterization is almost entirely through broad stereotypes; the lone exception is the female character, who at least adds an interesting twist despite being forced to act like an idiot because of the plot. It's a very old-school type of single-twist story, but the ending is completely implausible and falls apart if you breathe on it too hard. Weir is something of a throwback to an earlier era of scientific puzzle stories, though, so maybe one is inclined to give him a break on the writing quality. (I am not; one of the ways in which science fiction has improved is that you can get good scientific puzzles and good writing these days.) 
But the science is also so bad that I was literally facepalming while reading it. The premise of this story is that quantum computers are commercially available. That will cause a serious problem for Las Vegas casinos, because the generator for keno numbers is vulnerable to quantum algorithms. The solution proposed by the IT person for the casino? A quantum random number generator. (The words "fight quantum with quantum" appear literally in the text if you're wondering how bad the writing is.) You could convince me that an ancient keno system is using a pseudorandom number generator that might be vulnerable to some quantum algorithm and doesn't get reseeded often enough. Fine. And yes, quantum computers can be used to generate high-quality sources of random numbers. But this solution to the problem makes no sense whatsoever. It's like swatting a house fly with a nuclear weapon. Weir says explicitly in the story that all the keno system needs is an external source of high-quality random numbers. The next step is to go to Amazon and buy a hardware random number generator. If you want to splurge, it might cost you $250. Problem solved. Yes, hardware random number generators have various limitations that may cause you problems if you need millions of bits or you need them very quickly, but not for something as dead-simple and with such low entropy requirements as keno numbers! You need a trivial number of bits for each round; even the slowest and most conservative hardware random number generator would be fine. Hell, measure the noise levels on the casino floor. Point a camera at a lava lamp. Or just buy one of the physical ball machines they use for the lottery. This problem is heavily researched, by casinos in particular, and is not significantly changed by the availability of quantum computers, at least for applications such as keno where the generator can be reseeded before each generation. You could maybe argue that this is an excuse for the IT guy to get his hands on a quantum computer, which fits the stereotypes, but that still breaks the story for reasons that would be spoilers. As soon as any other casino thought about this, they'd laugh in the face of the characters. I don't want to make too much of this, since anyone can write one bad story, but this story was dire at every level. I still owe Weir a proper chance at novel length, but I can't say this added to my enthusiasm. (2) Rating: 4 out of 10
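For the curious, the entropy arithmetic behind the keno complaint is easy to check; a quick sketch, assuming a standard 20-from-80 keno draw:

```python
import math
import secrets

# A keno round draws 20 numbers from 80; count the distinct outcomes.
outcomes = math.comb(80, 20)
print(f"distinct draws: {outcomes:.3e} (~{math.log2(outcomes):.1f} bits of entropy per round)")

# Roughly 62 bits per round is trivial for the OS CSPRNG, which any cheap
# hardware RNG (or a camera pointed at a lava lamp) can keep reseeded.
draw = sorted(secrets.SystemRandom().sample(range(1, 81), 20))
print("example draw:", draw)
```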

31 December 2022

Guido Günther: Phosh 2022 in retrospect

I wanted to look back at what changed in phosh in 2022 and figured I could share it with you. I'll be focusing on things very close to the mobile shell; for a broader overview see Evangelos' upcoming FOSDEM talk.

Some numbers

We're usually aiming for a phosh release at the end of each month. In 2022 we did 10 releases like that: seven major releases (bumping the middle version number) and three betas. We skipped the April and November releases. We also did one out-of-line bugfix release (bumping the last bit of the version number). I hope we can keep that cadence in 2023 as it allows us to get changes to users in a timely fashion (thus closing usability gaps as early as possible) as well as giving distributions a way to plan ahead. Ideally we'd not skip any release but sometimes real life just interferes. Those releases contain code contributions from about 20 different people and translations from about 30 translators. These numbers are roughly the same as in 2021, which is great. Thanks everyone! In phosh's git repository we had a bit over 730 non-merge commits (roughly 2 per day), which is about 10% less than in 2021. Looking closer, this is easily compensated by commits to phoc (which needed quite some work for the gestures) and to phosh-mobile-settings, which didn't exist in 2021.

User visible features

Most notable new features are likely the swipe gestures for the top and bottom bars, the possibility to use the quick settings on the lock screen, as well as the style refresh driven by Sam Hewitt that e.g. touched the modal dialogs (but also sliders, dialpads, etc.):

Style refresh; swipe-up gesture

We also added the possibility to have custom widgets via loadable plugins on the lock screen so the user can decide which information should be available. The plugins we currently ship are maintained within phosh's source tree, although out-of-tree plugins should be doable too. There's a settings application (the above-mentioned phosh-mobile-settings) to enable these. It also allows those plugins to have individual preferences:

A plugin; plugin preferences

Speaking of configurability: scale-to-fit settings (to work around applications that don't fit the screen) and haptic/LED feedback are now configurable without resorting to the command line:

Scale-to-fit; feedbackd settings

We can also have device-specific settings, which helps to temporarily accumulate special workarounds without affecting other phones. Other user-visible features include the ability to shuffle the digits on the lock screen's keypad, a VPN quick setting, improved screenshot support and automatic high-contrast theme switching when in bright sunlight (based on ambient sensor readings), as shown here. As mentioned above, Evangelos will talk at FOSDEM 2023 about the broader ecosystem improvements including GNOME, GTK, wlroots, phoc, feedbackd, ModemManager, mmsd, NetworkManager and many others without which phosh wouldn't be possible.

What else

As I wanted a T-shirt for DebConf 2022 in Prizren, I created a logo heavily inspired by those cute tiger images you often see in Southeast Asia. Based on that I also made a first batch of stickers, mostly distributed at FrOSCon 2022:

Phosh stickers

That's it for 2022. If you want to get involved in phosh testing, development or documentation then just drop by in the matrix room.

9 December 2022

Matthew Garrett: End-to-end encrypted messages need more than libsignal

(Disclaimer: I'm not a cryptographer, and I do not claim to be an expert in Signal. I've had this read over by a couple of people who are, so with luck there are no egregious errors, but any mistakes here are mine)

There are indications that Twitter is working on end-to-end encrypted DMs, likely building on work that was done back in 2018. This made use of libsignal, the reference implementation of the protocol used by the Signal encrypted messaging app. There seems to be a fairly widespread perception that, since libsignal is widely deployed (it's also the basis for WhatsApp's e2e encryption) and open source and has been worked on by a whole bunch of cryptography experts, choosing to use libsignal means that 90% of the work has already been done. And in some ways this is true - the security of the protocol is probably just fine. But there's rather more to producing a secure and usable client than just sprinkling on some libsignal.

(Aside: To be clear, I have no reason to believe that the people who were working on this feature in 2018 were unaware of this. This thread kind of implies that the practical problems are why it didn't ship at the time. Given the reduction in Twitter's engineering headcount, and given the new leadership's espousal of political and social perspectives that don't line up terribly well with the bulk of the cryptography community, I have doubts that any implementation deployed in the near future will get all of these details right)

I was musing about this last night and someone pointed out some prior art. Bridgefy is a messaging app that uses Bluetooth as its transport layer, allowing messaging even in the absence of data services. The initial implementation involved a bunch of custom cryptography, enabling a range of attacks ranging from denial of service to extracting plaintext from encrypted messages. In response to criticism Bridgefy replaced their custom cryptographic protocol with libsignal, but that didn't fix everything. One issue is the potential for MITMing - keys are shared on first communication, but the client provided no mechanism to verify those keys, so a hostile actor could pretend to be a user, receive messages intended for that user, and then reencrypt them with the user's actual key. This isn't a weakness in libsignal, in the same way that the ability to add a custom certificate authority to a browser's trust store isn't a weakness in TLS. In Signal the app key distribution is all handled via Signal's servers, so if you're just using libsignal you need to implement the equivalent yourself.
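As a concrete illustration of what "implement the equivalent yourself" involves, here is a minimal, hypothetical sketch of out-of-band key verification; it is not Signal's actual safety-number algorithm, just the shape of the check a client has to offer so that a substituted key becomes detectable:

```python
import hashlib

def fingerprint(identity_key: bytes) -> str:
    """Short, human-comparable digest of a peer's public identity key."""
    digest = hashlib.sha256(identity_key).hexdigest()[:32]
    # Group into 4-character chunks so two people can read it aloud or scan it.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

def verify_out_of_band(received_peer_key: bytes, fingerprint_from_peer: str) -> bool:
    # The comparison must happen over a channel the key server cannot tamper
    # with: in person, over a phone call, or via a scanned QR code.
    return fingerprint(received_peer_key) == fingerprint_from_peer
```

Signal's real safety numbers combine both parties' keys and identifiers through a dedicated derivation, but the point stands: without some user-visible verification step, whoever controls key distribution can silently substitute keys.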

The other issue was more subtle. libsignal has no awareness at all of the Bluetooth transport layer. Deciding where to send a message is up to the client, and these routing messages were spoofable. Any phone in the mesh could say "Send messages for Bob here", and other phones would do so. This should have been a denial of service at worst, since the messages for Bob would still be encrypted with Bob's key, so the attacker would be able to prevent Bob from receiving the messages but wouldn't be able to decrypt them. However, the code to decide where to send the message and the code to decide which key to encrypt the message with were separate, and the routing decision was made before the encryption key decision. An attacker could send a message saying "Route messages for Bob to me", and then another saying "Actually lol no I'm Mallory". If a message was sent between those two messages, the message intended for Bob would be delivered to Mallory's phone and encrypted with Mallory's key.

Again, this isn't a libsignal issue. libsignal encrypted the message using the key bundle it was told to encrypt it with, but the client code gave it a key bundle corresponding to the wrong user. A race condition in the client logic allowed messages intended for one person to be delivered to and readable by another.
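A toy model of that race, using hypothetical names rather than Bridgefy's actual code: the attacker's first announcement has already pointed Bob's route at Mallory's node, and the second one (changing the advertised key) lands between the client's routing decision and its key decision.

```python
from copy import deepcopy
from dataclasses import dataclass

@dataclass
class PeerRecord:
    route: str       # which mesh node claims to receive this peer's traffic
    key_bundle: str  # which key bundle we would hand to libsignal for this peer

def encrypt(key_bundle: str, plaintext: str) -> str:
    return f"<{plaintext!r} encrypted for {key_bundle}>"  # stand-in for libsignal

def fresh_directory() -> dict:
    # State after the attacker's first announcement: Bob's route already
    # points at Mallory's node, but the advertised key is still Bob's.
    return {"bob": PeerRecord(route="mallory-node", key_bundle="bob-key")}

def second_announcement(directory: dict) -> None:
    directory["bob"].key_bundle = "mallory-key"  # "actually lol no I'm Mallory"

def send_racy(directory: dict, recipient: str, plaintext: str):
    route = directory[recipient].route     # routing decision made first
    second_announcement(directory)         # spoofed update lands in between
    key = directory[recipient].key_bundle  # key decision made separately, later
    return route, encrypt(key, plaintext)  # Mallory's node AND Mallory's key: plaintext exposed

def send_snapshot(directory: dict, recipient: str, plaintext: str):
    record = deepcopy(directory[recipient])  # one consistent view of route and key
    second_announcement(directory)
    return record.route, encrypt(record.key_bundle, plaintext)  # misrouted, but still Bob's key: DoS at worst

print(send_racy(fresh_directory(), "bob", "hi"))
print(send_snapshot(fresh_directory(), "bob", "hi"))
```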

This isn't the only case where client code has used libsignal poorly. The Bond Touch is a Bluetooth-connected bracelet that you wear. Tapping it or drawing gestures sends a signal to your phone, which culminates in a message being sent to someone else's phone which sends a signal to their bracelet, which then vibrates and glows in order to indicate a specific sentiment. The idea is that you can send brief indications of your feelings to someone you care about by simply tapping on your wrist, and they can know what you're thinking without having to interrupt whatever they're doing at the time. It's kind of sweet in a way that I'm not, but it also advertised "Private Spaces", a supposedly secure way to send chat messages and pictures, and that seemed more interesting. I grabbed the app and disassembled it, and found it was using libsignal. So I bought one and played with it, including dumping the traffic from the app. One important thing to realise is that libsignal is just the protocol library - it doesn't implement a server, and so you still need some way to get information between clients. And one of the bits of information you have to get between clients is the public key material.

Back when I played with this earlier this year, key distribution was implemented by uploading the public key to a database. The other end would download the public key, and everything works out fine. And this doesn't sound like a problem, given that the entire point of a public key is to be, well, public. Except that there was no access control on this database, and the filenames were simply phone numbers, so you could overwrite anyone's public key with one of your choosing. This didn't let you cause messages intended for them to be delivered to you, so exploiting this for anything other than a DoS would require another vulnerability somewhere, but there are contrived situations where this would potentially allow the privacy expectations to be broken.

Another issue with this app was its handling of one-time prekeys. When you send someone new a message via Signal, it's encrypted with a key derived from not only the recipient's identity key, but also from what's referred to as a "one-time prekey". Users generate a bunch of keypairs and upload the public half to the server. When you want to send a message to someone, you ask the server for one of their one-time prekeys and use that. Decrypting this message requires using the private half of the one-time prekey, and the recipient deletes it afterwards. This means that an attacker who intercepts a bunch of encrypted messages over the network and then later somehow obtains the long-term keys still won't be able to decrypt the messages, since they depended on keys that no longer exist. Since these one-time prekeys are only supposed to be used once (it's in the name!) there's a risk that they can all be consumed before they're replenished. The spec regarding pre-keys says that servers should consider rate-limiting this, but the protocol also supports falling back to just not using one-time prekeys if they're exhausted (you lose the forward secrecy benefits, but it's still end-to-end encrypted). This implementation not only implemented no rate-limiting, making it easy to exhaust the one-time prekeys, it then also failed to fall back to running without them. Another easy way to force DoS.
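A minimal sketch of the server-side behaviour that paragraph implies is needed: hand out each one-time prekey at most once, rate-limit how fast one requester can drain them, and fall back to a bundle without a one-time prekey when the pool is empty. The names and limits here are made up for illustration; this is not the real Signal server or the app's backend.

```python
import time
from collections import defaultdict, deque

one_time_prekeys = defaultdict(deque)  # target user -> queue of uploaded public prekeys
fetch_history = defaultdict(deque)     # requester -> timestamps of recent fetches

RATE_LIMIT = 10          # fetches per requester per window (illustrative numbers)
WINDOW_SECONDS = 3600

def fetch_prekey_bundle(requester: str, target: str) -> dict:
    now = time.time()
    recent = fetch_history[requester]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= RATE_LIMIT:
        raise PermissionError("rate limited")   # the "should consider rate-limiting" part
    recent.append(now)

    pool = one_time_prekeys[target]
    one_time = pool.popleft() if pool else None  # each prekey is handed out once, then gone
    # With an empty pool we still return a usable bundle: the sender loses the
    # extra forward secrecy of a one-time prekey, but the message can still be sent.
    return {
        "identity_key": f"{target}/identity",
        "signed_prekey": f"{target}/signed-prekey",
        "one_time_prekey": one_time,
    }
```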

(And, remember, a successful DoS on an encrypted communications channel potentially results in the users falling back to an unencrypted communications channel instead. DoS may not break the encrypted protocol, but it may be sufficient to obtain plaintext anyway)

And finally, there's ClearSignal. I looked at this earlier this year - it's avoided many of these pitfalls by literally just being a modified version of the official Signal client and using the existing Signal servers (it's even interoperable with Actual Signal), but it's then got a bunch of other weirdness. The Signal database (I /think/ including the keys, but I haven't completely verified that) gets backed up to an AWS S3 bucket, identified using something derived from a key using KERI, and I've seen no external review of that whatsoever. So, who knows. It also has crash reporting enabled, and it's unclear how much internal state it sends on crashes, and it's also based on an extremely old version of Signal with the "You need to upgrade Signal" functionality disabled.

Three clients all using libsignal in one form or another, and three clients that do things wrong in ways that potentially have a privacy impact. Again, none of these issues are down to issues with libsignal, they're all in the code that surrounds it. And remember that Twitter probably has to worry about other issues as well! If I lose my phone I'm probably not going to worry too much about the messages sent through my weird bracelet app being gone forever, but losing all my Twitter DMs would be a significant change in behaviour from the status quo. But that's not an easy thing to do when you're not supposed to have access to any keys! Group chats? That's another significant problem to deal with. And making the messages readable through the web UI as well as on mobile means dealing with another set of key distribution issues. Get any of this wrong in one way and the user experience doesn't line up with expectations, get it wrong in another way and the worst case involves some of your users in countries with poor human rights records being executed.

Simply building something on top of libsignal doesn't mean it's secure. If you want meaningful functionality you need to build a lot of infrastructure around libsignal, and doing that well involves not just competent development and UX design, but also a strong understanding of security and cryptography. Given Twitter's lost most of their engineering and is led by someone who's alienated all the cryptographers I know, I wouldn't be optimistic.


Russell Coker: USB-PD and GaN

Photo of two USB-PD chargers

A recent development is cheap Gallium Nitride based power supplies that provide better efficiency in a smaller space than other technologies. Kogan recently had a special on such devices so I decided to try them out with my new Thinkpad X1 Carbon Gen 5 [1]. Google searches for power supplies for that Thinkpad included results for 30W PSUs, which implies that any 30W USB-C PSU should work. I bought a 30W charger for $10 that can supply 15V/2A or 20V/1.5A on a single USB-C port, or 15W on the USB-C port and 15W on the USB-2 port at the same time, and expected it to work as a laptop charger. Unfortunately it didn't. I don't know whether the adverts for 30W Thinkpad PSUs were false or whether the claim of the GaN charger I bought being 30W was false; all I know is that the KDE power applet said that the PSU couldn't supply enough power. I then bought a 68W charger for $28 that can supply 20.0V/3.0A on a single USB-C port if the USB-2 port isn't used, and 50W on the USB-C port if the USB-2 port is also being used. This worked well, which wasn't a great surprise as I had previously run the laptop on 45W PSUs. If I connect a phone to the USB-2 port while the laptop is being charged then the laptop will be briefly cut off; presumably the voltage and current are being renegotiated when that happens. As you can see the 68W charger is significantly larger than the 30W charger, but still small enough to easily fit in a jacket pocket and smaller than a regular laptop charger. One of my uses for this will be to put it in a jacket pocket when I have my laptop in another pocket. Another use will be for charging in my car, as the cables from the inverter to convert 12VDC to 240VAC take up enough space. I will probably get a ~50W USB-PD charger that connects to a car cigarette lighter socket when a GaN version of such a charger becomes available.
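For reference, a quick sketch of the arithmetic behind those advertised profiles (the 45W figure is simply the PSU rating the laptop had previously been run from; the actual negotiated power depends on the charger and the laptop's firmware):

```python
# Wattage of each advertised USB-PD profile (volts times amps).
profiles = {
    "30W charger, 15V/2.0A": 15.0 * 2.0,
    "30W charger, 20V/1.5A": 20.0 * 1.5,
    "68W charger, 20V/3.0A (USB-C only)": 20.0 * 3.0,
}
for name, watts in profiles.items():
    print(f"{name}: {watts:.0f} W")
# The two 30W profiles sit right at 30 W, below the 45 W the laptop was
# previously run from, while the single-port 60 W profile has headroom.
```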

8 December 2022

Reproducible Builds: Reproducible Builds in November 2022

Welcome to yet another report from the Reproducible Builds project, this time for November 2022. In all of these reports (which we have been publishing regularly since May 2015) we attempt to outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.

Reproducible Builds Summit 2022

Following up from last month's report about our recent summit in Venice, Italy, a comprehensive report from the meeting has not been finalised yet; watch this space! As a very small preview, however, we can link to several issues that were filed about the website during the summit (#38, #39, #40, #41, #42, #43, etc.), and note that we collectively learned about Software Bills of Materials (SBOMs) and how .buildinfo files can be seen/used as SBOMs. And, no less importantly, the Reproducible Builds t-shirt design has been updated.

Reproducible Builds at European Cyber Week 2022

During the European Cyber Week 2022, a Capture The Flag (CTF) cybersecurity challenge on the subject of Reproducible Builds was created by Frédéric Pierret. The challenge was pedagogical in nature, based on how to make a software release reproducible: to progress through it, issues that affect the reproducibility of the build (such as the build path, timestamps, file ordering, etc.) had to be fixed in steps in order to reach the final flag and win the challenge. At the end of the competition, five people succeeded in solving the challenge, all of whom were awarded with a shirt. Frédéric Pierret intends to turn this into a similar "how to" in the Reproducible Builds documentation; two of the 2022 winners are shown here:
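As an illustration of the issue classes mentioned above (build paths, timestamps, file ordering), here is a minimal, hypothetical sketch of how an archive-building step is typically made reproducible; it is not the actual challenge code:

```python
import gzip
import os
import tarfile

def make_reproducible_tarball(src_dir: str, out_path: str) -> None:
    """Build a .tar.gz whose bytes do not depend on build path, clock or readdir order."""
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", "0"))

    def normalise(info: tarfile.TarInfo) -> tarfile.TarInfo:
        info.mtime = min(info.mtime, epoch)   # clamp timestamps to SOURCE_DATE_EPOCH
        info.uid = info.gid = 0               # drop builder-specific ownership
        info.uname = info.gname = ""
        return info

    entries = []
    for root, dirs, files in os.walk(src_dir):
        dirs.sort()                           # deterministic traversal order
        entries.extend(os.path.join(root, f) for f in sorted(files))

    with open(out_path, "wb") as raw:
        # Fix the gzip header timestamp too, another classic source of variation.
        with gzip.GzipFile(fileobj=raw, mode="wb", mtime=epoch) as gz:
            with tarfile.open(fileobj=gz, mode="w") as tar:
                for path in entries:
                    # Store paths relative to src_dir so the absolute build path never leaks in.
                    tar.add(path, arcname=os.path.relpath(path, src_dir), filter=normalise)
```

Running the same function twice, in different directories and at different times, should then produce byte-identical output.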

On business adoption and use of reproducible builds

Simon Butler announced on the rb-general mailing list that the Software Quality Journal published an article called On business adoption and use of reproducible builds for open and closed source software. This article is an interview-based study which focuses on the adoption and uses of Reproducible Builds in industry, investigating in particular the reasons why organisations might not have adopted them:
[…] industry application of R-Bs appears limited, and we seek to understand whether awareness is low or if significant technical and business reasons prevent wider adoption.
This is achieved through interviews with software practitioners and business managers, and touches on both the business and technical reasons supporting the adoption (or not) of Reproducible Builds. The article also begins with an excellent explanation and literature review, and even introduces a new helpful analogy for reproducible builds:
[Users are] able to perform a bitwise comparison of the two binaries to verify that they are identical and that the distributed binary is indeed built from the source code in the way the provider claims. Applied in this manner, R-Bs function as a canary, a mechanism that indicates when something might be wrong, and offer an improvement in security over running unverified binaries on computer systems.
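A minimal sketch of the "bitwise comparison" the quoted passage describes, assuming you have both the distributed artifact and your own independent rebuild of it on disk (the file names are placeholders):

```python
import hashlib
import sys

def sha256(path: str) -> str:
    """Hash a file in chunks so large artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python3 verify.py distributed.deb locally-rebuilt.deb
    official, rebuilt = sys.argv[1], sys.argv[2]
    if sha256(official) == sha256(rebuilt):
        print("Identical: the distributed binary matches the independent rebuild.")
    else:
        print("Mismatch: the canary fired; inspect the difference (diffoscope helps).")
```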
The full paper is available to download on an open access basis. Elsewhere in academia, Beatriz Michelson Reichert and Rafael R. Obelheiro have published a paper proposing a systematic threat model for a generic software development pipeline, identifying possible mitigations for each threat (PDF). Under the Tampering rubric of their paper, they describe various attacks against Continuous Integration (CI) processes:
An attacker may insert a backdoor into a CI or build tool and thus introduce vulnerabilities into the software (resulting in an improper build). To avoid this threat, it is the developer's responsibility to take due care when making use of third-party build tools. Tampered compilers can be mitigated using diversity, as in the diverse double compiling (DDC) technique. Reproducible builds, a recent research topic, can also provide mitigation for this problem. (PDF)

Misc news
On our mailing list this month:

Debian & other Linux distributions

Over 50 reviews of Debian packages were added this month, another 48 were updated and almost 30 were removed, all of which adds to our knowledge about identified issues. Two new issue types were added as well. [ ][ ] Vagrant Cascadian announced on our mailing list another online sprint to help clear the huge backlog of reproducible builds patches submitted by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd, but others were held on October 6th and October 20th; two additional sprints occurred in November, resulting in further progress. Lastly, Roland Clobus posted his latest update of the status of reproducible Debian ISO images on our mailing list. This reports that all major desktops build reproducibly with bullseye, bookworm and sid, as well as that no custom patches needed to be applied to Debian unstable for this result to occur. During November, however, Roland proposed some modifications to live-setup, and the rebuild script has been adjusted to fix the failing Jenkins tests for Debian bullseye. [ ][ ]
In other news, Miro Hrončok proposed a change to clamp build modification times to the value of SOURCE_DATE_EPOCH. This was initially suggested and discussed on a devel@ mailing list post but was later written up on the Fedora Wiki as well as being officially proposed to the Fedora Engineering Steering Committee (FESCo).
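For readers unfamiliar with the idea, here is a minimal sketch of what such clamping amounts to (illustrative only, not the proposed Fedora implementation): any file timestamp newer than SOURCE_DATE_EPOCH is set back to that value, while older timestamps are left alone, so rebuilds at different times produce identical metadata.

```python
import os

def clamp_mtimes(root: str) -> None:
    """Clamp modification times of files under 'root' to SOURCE_DATE_EPOCH."""
    epoch = int(os.environ["SOURCE_DATE_EPOCH"])
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.lstat(path).st_mtime > epoch:
                os.utime(path, (epoch, epoch), follow_symlinks=False)
```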

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 226 and 227 to Debian:
  • Support both python3-progressbar and python3-progressbar2, two modules providing the progressbar Python module. [ ]
  • Don't run Python decompiling tests on Python bytecode that file(1) cannot detect yet and Python 3.11 cannot unmarshal. (#1024335)
  • Don't attempt to attach a text-only differences notice if there are no differences to begin with. (#1024171)
  • Make sure we recommend apksigcopier. [ ]
  • Tidy generation of os_list. [ ]
  • Make the code clearer around generating the Debian substvars. [ ]
  • Use our assert_diff helper in test_lzip.py. [ ]
  • Drop other copyright notices from lzip.py and test_lzip.py. [ ]
In addition to this, Christopher Baines added lzip support [ ], and FC Stegerman added an optimisation whereby we don't run apktool if no differences are detected before the signing block [ ].
A significant number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb ensuring the openEuler logo is correctly visible with a white background [ ], FC Stegerman de-duplicating contributors by email address to avoid listing some of them twice [ ], Hervé Boutemy adding Apache Maven to the list of affiliated projects [ ] and boyska updating our Contribute page to remark that the Reproducible Builds presence on salsa.debian.org is not just the Git repository but is also for creating issues [ ][ ]. In addition to all this, however, Holger Levsen made the following changes:
  • Add a number of existing publications [ ][ ] and update metadata for some existing publications as well [ ].
  • Hide draft posts on the website homepage. [ ]
  • Add the Warpforge build tool as a participating project of the summit. [ ]
  • Clarify in the footer that we welcome patches to the website repository. [ ]

Testing framework

The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, the following changes were made by Holger Levsen:
  • Improve the generation of meta package sets (used in grouping packages for reporting/statistical purposes) to treat Debian bookworm as equivalent to Debian unstable in this specific case [ ] and to parse the list of packages used in the Debian cloud images [ ][ ][ ].
  • Temporarily allow Frederic to ssh(1) into our snapshot server as the jenkins user. [ ]
  • Keep some reproducible jobs' Jenkins logs much longer [ ] (later reverted).
  • Improve the node health checks to detect failures to update the Debian cloud image package set [ ][ ] and to improve prioritisation of some kernel warnings [ ].
  • Always echo any IRC output to Jenkins output as well. [ ]
  • Deal gracefully with problems related to processing the cloud image package set. [ ]
Finally, Roland Clobus continued his work on testing Live Debian images, including adding support for specifying the origin of the Debian installer [ ] and warning when the image has unmet dependencies in the package list (e.g. due to a transition) [ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can get in touch with us via:

10 November 2022

Alastair McKinstry: Government approves T&Cs for first offshore wind auction under the Renewable Electricity Support Scheme

Government approves T&Cs for first offshore wind auction under the Renewable Electricity Support Scheme. ORESS 1 expected to secure 2.5GW of electricity generating capacity. #GreensInGovernment

The Government has today approved the Terms and Conditions of ORESS 1, the first auction for offshore wind under the Renewable Electricity Support Scheme. This is a seminal moment in the delivery of offshore wind in Ireland. The offshore auction, the first in Ireland's history, is expected to provide a route to market for up to 2.5GW of offshore renewable energy to the Irish grid, enough to power 2.5 million Irish homes with clean electricity. Coming during the weeks of COP27, publishing details of the auction sends a strong international signal that Ireland is serious about offshore energy and our national climate targets and obligations.

Recognising the critical role of local hosting communities in the development of this critical infrastructure, all offshore wind projects developed via ORESS will be required to make Community Benefit Fund contributions, from the construction phase and continuing for the duration of the support period, typically for a total period of 25 years. This will result in lasting, tangible benefits for these communities.

Speaking about this development, Minister Ryan said: The publication of these ORESS 1 Terms and Conditions is another massive step forward for offshore wind, for Irish climate leadership and towards Ireland's future as an international green energy hub. The first stage of this transformative auction will start before Christmas and it sets us on a path to powering many more of our homes and businesses from our own green energy resources over the coming years.

It follows the enactment of the Maritime Area Planning Act last year, and the announcement regarding the awarding of Maritime Area Consents to Phase One projects last month. A final ORESS 1 auction calendar will be published by EirGrid shortly. The pre-qualification stage will launch next month (December). The qualification stage and the auction process will take place in the first half of 2023. Final auction results will be published by June 2023.

Eligible projects

Any project that has been awarded a Maritime Area Consent is eligible to partake in the ORESS 1 auction. Seven projects, known as Relevant Projects, were deemed ready to apply for Maritime Area Consents in Spring 2022. Once Maritime Area Consents are granted, these projects can not only compete for State support via the ORESS, but can also apply for planning permission from An Bord Pleanála.

Delivering on our broader targets

ORESS 1 is expected to procure approximately 2.5GW of electricity generating capacity. Further auctions will be required to meet our renewable energy and climate ambitions. At least three offshore energy auctions are currently planned for this decade. The aim is to progress enough viable offshore projects through the consenting system to have competitive auctions. This will ultimately drive down the cost for electricity consumers.

7 October 2022

Reproducible Builds: Reproducible Builds in September 2022

Welcome to the September 2022 report from the Reproducible Builds project! In our reports we try to outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.
David A. Wheeler reported to us that the US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF). As David remarked in his post to our mailing list, it expressly recommends having reproducible builds as part of "advanced recommended mitigations". The publication of this document has been accompanied by a press release.
Holger Levsen was made aware of a small Microsoft project called oss-reproducible. Part of OSSGadget, a larger collection of tools for "analyzing open source packages", the purpose of oss-reproducible is to:
analyze open source packages for reproducibility. We start with an existing package (for example, the NPM left-pad package, version 1.3.0), and we try to answer the question: "Do the package contents authentically reflect the purported source code?"
More details can be found in the README.md file within the code repository.
David A. Wheeler also pointed out that there are some potential upcoming changes to the OpenSSF Best Practices badge for open source software in relation to reproducibility. Whilst the badge programme has three certification levels ("passing", "silver" and "gold"), the gold level includes the criterion that "The project MUST have a reproducible build". David reported that some projects have argued that this reproducibility criterion should be slightly relaxed, as outlined in an issue on the best-practices-badge GitHub project. Essentially, though, the claim is that the reproducibility requirement doesn't make sense for projects that do not release built software, and that timestamp differences by themselves don't necessarily indicate malicious changes. Numerous pragmatic problems around excluding timestamps were raised in the discussion of the issue.
Sonatype, a pioneer of "software supply chain management", issued a press release this month to report that they had found:
[…] a massive year-over-year increase in cyberattacks aimed at open source project ecosystems. According to early data from Sonatype's 8th annual State of the Software Supply Chain Report, which will be released in full this October, Sonatype has recorded an average 700% jump in repository attacks over the last three years.
More information is available in the press release.
A number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb adding a redirect from /projects/ to /who/ in order to keep old or archived links working [ ], Jelle van der Waa added a Rust programming language example for SOURCE_DATE_EPOCH [ ][ ] and Mattia Rizzolo included Protocol Labs amongst our project-level sponsors [ ].

Debian

There was a large amount of reproducibility work taking place within Debian this month:
  • The nfft source package was removed from the archive, and all packages in Debian bookworm now have a corresponding .buildinfo file. This can be confirmed and tracked on the associated page on the tests.reproducible-builds.org site.
  • Vagrant Cascadian announced on our mailing list an informal online sprint to help clear the huge backlog of reproducible builds patches submitted by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd with the following results:
    • Holger Levsen:
      • Mailed #1010957 in man-db asking for an update and whether to remove the patch tag for now. This was subsequently removed and the maintainer started to address the issue.
      • Uploaded gmp to DELAYED/15, fixing #1009931.
      • Emailed #1017372 in plymouth and asked for the maintainer's opinion on the patch. This resulted in the maintainer improving Vagrant's original patch (and uploading it) as well as filing an issue upstream.
      • Uploaded time to DELAYED/15, fixing #983202.
    • Vagrant Cascadian:
      • Verified and updated a patch for mylvmbackup. (#782318)
      • Verified/updated patches for libranlip. (#788000, #846975 & #1007137)
      • Uploaded libranlip to DELAYED/10.
      • Verified patch for cclive. (#824501)
      • Uploaded cclive to DELAYED/10.
      • Vagrant was unable to reproduce the underlying issue within #791423 (linuxtv-dvb-apps) and so the bug was marked as "done".
      • Researched #794398 (in clhep).
    The plan is to repeat these sprints every two weeks, with the next taking place on Thursday October 6th at 16:00 UTC on the #debian-reproducible IRC channel.
  • Roland Clobus posted his 13th update of the status of reproducible Debian ISO images on our mailing list. During the last month, Roland ensured that the live images are now automatically fed to openQA for automated testing after they have been shown to be reproducible. Additionally Roland asked on the debian-devel mailing list about a way to determine the canonical timestamp of the Debian archive. [ ]
  • Following up on last month's work on reproducible bootstrapping, Holger Levsen filed two bugs against the debootstrap and cdebootstrap utilities. (#1019697 & #1019698)
Lastly, 44 reviews of Debian packages were added, 91 were updated and 17 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated too, including the descriptions of cmake_rpath_contains_build_path [ ], nondeterministic_version_generated_by_python_param [ ] and timestamps_in_documentation_generated_by_org_mode [ ]. Furthermore, two new issue types were created: build_path_used_to_determine_version_or_package_name [ ] and captures_build_path_via_cmake_variables [ ].

Other distributions

In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.

diffoscope

diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 222 and 223 to Debian, and made the following changes:
  • The cbfstools utility is now provided in Debian via the coreboot-utils package so we can enable that functionality within Debian. [ ]
  • Looked into Mach-O support.
  • Fixed the try.diffoscope.org service by addressing a compatibility issue between glibc/seccomp that was preventing the Docker-contained diffoscope instance from spawning any external processes whatsoever [ ]. I also updated the requirements.txt file, as some of the specified packages were no longer available [ ][ ].
In addition Jelle van der Waa added support for file version 5.43 [ ] and Mattia Rizzolo updated the packaging:
  • Also include coreboot-utils in the Build-Depends and Test-Depends fields so that it is available for tests. [ ]
  • Use pep517 and pip to load the requirements. [ ]
  • Remove packages in Breaks/Replaces that have been obsoleted since the release of Debian bullseye. [ ]

Reprotest reprotest is our end-user tool to build the same source code twice in widely and deliberate different environments, and checking whether the binaries produced by the builds have any differences. This month, reprotest version 0.7.22 was uploaded to Debian unstable by Holger Levsen, which included the following changes by Philip Hands:
  • Ensure that the setarch(8) utility can actually execute before including an architecture to test. [ ]
  • Include all files matching *.*deb in the default artifact_pattern in order to archive all results of the build. [ ]
  • Emit an error when building the Debian package if the Debian packaging version does not match the Python version of reprotest. [ ]
  • Remove an unneeded invocation of the head(1) utility. [ ]

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework

The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, however, the following changes were made:
  • Holger Levsen:
    • Add a job to build reprotest from Git [ ] and use the correct Git branch when building it [ ].
  • Mattia Rizzolo:
    • Enable syncing of results from building live Debian ISO images. [ ]
    • Use scp -p in order to preserve modification times when syncing live ISO images. [ ]
    • Apply the shellcheck shell script analysis tool. [ ]
    • In a build node wrapper script, remove some debugging code which was messing up calling scp(1) correctly [ ] and consequently add support to use both scp -p and regular scp [ ].
  • Roland Clobus:
    • Track and handle the case where the Debian archive gets updated between two live image builds. [ ]
    • Remove a call to sudo(1) as it is not (or no longer) required to delete old live-build results. [ ]

Contact

As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

4 October 2022

Russ Allbery: Review: The Dragon Never Sleeps

Review: The Dragon Never Sleeps, by Glen Cook
Publisher: Night Shade Books
Copyright: 1988
Printing: 2008
ISBN: 1-59780-099-6
Format: MOBI
Pages: 449
Canon Space is run, in a way, by the noble mercantile houses, who spread their cities, colonies, and mines through the mysterious Web that allows faster-than-light interstellar travel. The true rulers of Canon Space, though, are the Guardships: enormous, undefeatable starships run by crews who have become effectively immortal by repeated uploading and reincarnation. Or, in the case of the Deified, without reincarnation, observing, ordering, advising, and meddling from an entirely virtual existence. The Guardships have enforced the status quo for four thousand years. House Tregesser thinks they have the means to break the stranglehold of the Guardships. They have contact with Outsiders from beyond Canon Space who can provide advanced technology. They have their own cloning technology, which they use to create backup copies of their elites. And they have Lupo Provik, a quietly brilliant schemer who has devoted his life to destroying Guardships. This book was so bad. A more sensible person than I would have given up after the first hundred pages, but I was too stubborn. The stubbornness did not pay off. Sometimes I pick up an older SFF novel and I'm reminded of how much the quality bar in the field has been raised over the past twenty years. It's not entirely fair to treat The Dragon Never Sleeps as typical of 1980s science fiction: Cook is far better known for his Black Company military fantasy series, this is one of his minor novels, and it's only been intermittently in print. But I'm dubious this would have been published at all today. First, the writing is awful. It's choppy, cliched, awkward, and has no rhythm or sense of beauty. Here's a nearly random paragraph near the beginning of the book as a sample:
He hurled thunders and lightnings with renewed fury. The whole damned universe was out to frustrate him. XII Fulminata! What the hell? Was some malign force ranged against him? That was his most secret fear. That somehow someone or something was using him the way he used so many others.
(Yes, this is one of the main characters throwing a temper tantrum with a special effects machine.) In a book of 450 pages, there are 151 chapters, and most chapters switch viewpoint characters. Most of them also end with a question or some vaguely foreboding sentence to try to build tension, and while I'm willing to admit that sometimes works when used sparingly, every three pages is not sparingly. This book is also weirdly empty of description for its size. We get a few set pieces, a few battles, and a sentence or two of physical description of most characters when they're first introduced, but it's astonishing how little of a mental image I had of this universe after reading the whole book. Cook probably describes a Guardship at some point in this book, but if he does, it completely failed to stick in my memory. There are aliens that everyone recognizes as aliens, so presumably they look different than humans, but for most of them I have no idea how. Very belatedly we're told one important species (which never gets a name) has a distinctive smell. That's about it. Instead, nearly the whole book is dialogue and scheming. It's clear that Cook is intending to write a story of schemes and counter-schemes and jousting between brilliant characters. This can work if the dialogue is sufficiently sharp and snappy to carry the story. It is not.
"What mischief have you been up to, Kez Maefele?" "Staying alive in a hostile universe." "You've had more than your share of luck." "Perhaps luck had nothing to do with it, WarAvocat. Till now." "Luck has run out. The Ku Question has run its course. The symbol is about to receive its final blow."
There are hundreds of pages of this sort of thing. The setting is at least intriguing, if not stunningly original. There are immortal warships oppressing human space, mysterious Outsiders, great house politics, and an essentially immortal alien warrior who ends up carrying most of the story. That's material for a good space opera if the reader slowly learns the shape of the universe, its history, and its landmarks and political factions. Or the author can decline to explain any of that. I suppose that's also a choice. Here are some things that you may have been curious about after reading my summary, and which I'm still curious about after having finished the book: What laws do the Guardships impose and what's the philosophy behind those laws? How does the economic system work? Who built the Guardships originally, and how? How do the humans outside of Canon Space live? Who are the Ku? Why did they start fighting the humans? How many other aliens are there? What do they think of this? How does the Canon government work? How have the Guardships remained technologically superior for four thousand years? Even where the reader gets a partial explanation, such as what Web is and how it was built, it's an unimportant aside that's largely devoid of a sense of wonder. The one piece of world-building that this book is interested in is the individual Guardships and the different ways in which they've handled millennia of self-contained patrol, and even there we only get to see a few of them. There is a plot with appropriately epic scope, but even that is undermined by the odd pacing. Five, ten, or fifty years sometimes goes by in a sentence. A war starts, with apparently enormous implications for Canon Space, and then we learn that it continues for years without warranting narrative comment. This is done without transitions and without signposts for the reader; it's just another sentence in the narration, mixed in with the rhetorical questions and clumsy foreshadowing. I would like to tell you that at least the book has a satisfying ending that resolves the plot conflict that it finally reveals to the reader, but I had a hard time understanding why the ending even mattered. The plot was so difficult to follow that I'm sure I missed something, but it's not difficult to follow in the fun way that would make me want to re-read it. It's difficult to follow because Cook doesn't seem able to explain the plot in his head to the reader in any coherent form. I think the status quo was slightly disrupted? Maybe? Also, I no longer care. Oh, and there's a gene-engineered sex slave in this book, who various male characters are very protective and possessive of, who never develops much of a personality, and who has no noticeable impact on the plot despite being a major character. Yay. This was one of the worst books I've read in a long time. In retrospect, it was an awful place to start with Glen Cook. Hopefully his better-known works are also better-written, but I can't say I feel that inspired to find out. Rating: 2 out of 10

30 August 2022

John Goerzen: The PC & Internet Revolution in Rural America

Inspired by several others (such as Alex Schroeder's post and Szczeżuja's prompt), as well as a desire to get this down for my kids, I figure it's time to write a bit about living through the PC and Internet revolution where I did: outside a tiny town in rural Kansas. And, as I've been back in that same area for the past 15 years, I reflect some on the challenges that continue to play out. Although the stories from the others were primarily about getting online, I want to start by setting some background. Those of you that didn't grow up in the same era as I did probably never realized that a typical business PC setup might cost $10,000 in today's dollars, for instance. So let me start with the background.

Nothing was easy This story begins in the 1980s. Somewhere around my Kindergarten year of school, around 1985, my parents bought a TRS-80 Color Computer 2 (aka CoCo II). It had 64K of RAM and used a TV for display and sound. This got you the computer. It didn't get you any disk drive or anything, and no joysticks (required by a number of games). So whenever the system powered down, or it hung and you had to power cycle it (a frequent event), you'd lose whatever you were doing and would have to re-enter the program, literally by typing it in. The floppy drive for the CoCo II cost more than the computer, and it was quite common for people to buy the computer first and then the floppy drive later when they'd saved up the money for that. I particularly want to mention that computers then didn't come with a modem. That would be like buying a laptop or a tablet without wifi today. A modem, which I'll talk about in a bit, was another expensive accessory. To cobble together a system in the 80s that was capable of talking to others, with persistent storage (floppy, or hard drive), screen, keyboard, and modem, would be quite expensive. Adjusted for inflation, if you're talking a PC-style device (a clone of the IBM PC that ran DOS), this would easily be more expensive than the Macbook Pros of today. Few people back in the 80s had a computer at home. And the portion of those that had even the capability to get online in a meaningful way was even smaller. Eventually my parents bought a PC clone with 640K RAM and dual floppy drives. This was primarily used for my mom's work, but I did my best to take it over whenever possible. It ran DOS and, despite its monochrome screen, was generally a more capable machine than the CoCo II. For instance, it supported lowercase. (I'm not even kidding; the CoCo II pretty much didn't.) A while later, they purchased a 32MB hard drive for it. What luxury! Just getting a machine to work wasn't easy. Say you'd bought a PC, and then bought a hard drive, and a modem. You didn't just plug in the hard drive and it would work. You would have to fight it every step of the way. The BIOS and DOS partition tables of the day used a cylinder/head/sector method of addressing the drive, and various parts of that addressing scheme had too few bits to work with the big drives of the day, those above 20MB. So you would have to lie to the BIOS and fdisk in various ways, and sort of work out how to do it for each drive. For each peripheral (serial port, sound card in later years, etc.), you'd have to set jumpers for DMA and IRQs, hoping not to conflict with anything already in the system. Perhaps you can now start to see why USB and PCI were so welcomed.
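To put those cylinder/head/sector limits in concrete terms, here is a small Python sketch of the arithmetic. The geometry numbers are the commonly cited limits of the era, not necessarily the exact ones described above:

SECTOR_SIZE = 512  # bytes per sector

def chs_capacity(cylinders, heads, sectors_per_track):
    # Total addressable bytes for a given CHS geometry.
    return cylinders * heads * sectors_per_track * SECTOR_SIZE

# Early DOS used a 16-bit sector count per partition:
print(65_536 * SECTOR_SIZE / 2**20, "MiB")        # 32 MiB partition limit
# The classic BIOS/ATA intersection of 1024 cylinders, 16 heads, 63 sectors:
print(chs_capacity(1024, 16, 63) / 2**20, "MiB")  # about 504 MiB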

Sharing and finding resources Despite the two computers in our home, it wasn't as if software written on one machine just ran on another. A lot of software for PC clones assumed a CGA color display. The monochrome HGC in our PC wasn't particularly compatible. You could find a TSR program to emulate the CGA on the HGC, but it wasn't particularly stable, and there's only so much you can do when a program that assumes color is shown on a monitor that can only display black, dark amber, or light amber. So I'd periodically get to use other computers, most commonly at an office in the evening when it wasn't being used. There were some local computer clubs that my dad took me to periodically. Software was swapped back then; disks copied, shareware exchanged, and so forth. For me, at least, there was no "online" to download software from, and selling software over the Internet wasn't a thing at all.

Three Different Worlds There were sort of three different worlds of computing experience in the 80s:
  1. Home users. Initially using a wide variety of software from Apple, Commodore, Tandy/RadioShack, etc., but eventually coming to be mostly dominated by IBM PC clones
  2. Small and mid-sized business users. Some of them had larger minicomputers or small mainframes, but most that I had contact with by the early 90s were standardized on DOS-based PCs. More advanced ones had a network running Netware, most commonly. Networking hardware and software was generally too expensive for home users to use in the early days.
  3. Universities and large institutions. These are the places that had the mainframes, the earliest implementations of TCP/IP, the earliest users of UUCP, and so forth.
The differences between the home computing experience and the large institution experience were vast. Not only in terms of dollars (the large institution hardware could easily cost anywhere from tens of thousands to millions of dollars) but also in terms of sheer resources required (large rooms, enormous power circuits, support staff, etc). Nothing was in common between them; not operating systems, not software, not experience. I was never much aware of the third category until the differences started to collapse in the mid-90s, and even then I was only exposed to it once the collapse was well underway. You might say to me, "Well, Google certainly isn't running what I'm running at home!" And, yes of course, it's different. But fundamentally, most large datacenters are running on x86_64 hardware, with Linux as the operating system, and a TCP/IP network. It's a different scale, obviously, but at a fundamental level, the hardware and operating system stack are pretty similar to what you can readily run at home. Back in the 80s and 90s, this wasn't the case. TCP/IP wasn't even available for DOS or Windows until much later, and when it was, it was a clunky beast that was difficult to set up and use. One of the things Kevin Driscoll highlights in his book called Modem World (see my short post about it) is that the history of the Internet we usually receive is focused on case 3: the large institutions. In reality, the Internet was and is literally a network of networks. Gateways to and from the Internet existed for all three kinds of users for years, and while TCP/IP ultimately won the battle of the internetworking protocol, the other two streams of users also shaped the Internet as we now know it. Like many, I had no access to the large institution networks, but as I've been reflecting on my experiences, I've found a new appreciation for the way that those of us that grew up with primarily home PCs also shaped the evolution of today's online world.

An Era of Scarcity I should take a moment to comment about the cost of software back then. A newspaper article from 1985 comments that WordPerfect, then the most powerful word processing program, sold for $495 (or $219 if you could score a mail order discount). That's $1360/$600 in 2022 money. Other popular software, such as Lotus 1-2-3, was up there as well. If you were to buy a new PC clone in the mid to late 80s, it would often cost $2000 in 1980s dollars. Now add a printer: a low-end dot matrix for $300, or a laser for $1500 or even more. A modem: another $300. So the basic system would be $3600, or $9900 in 2022 dollars. If you wanted a nice printer, you're now pushing well over $10,000 in 2022 dollars. You start to see one barrier here, and also why things like shareware and piracy (if it was indeed even recognized as such) were common in those days. So you can see that going from a home computer setup (TRS-80, Commodore C64, Apple ][, etc) to a business-class PC setup was an order of magnitude increase in cost. From there to the high-end minis/mainframes was another order of magnitude (at least!) increase. Eventually there was price pressure on the higher end and things all got better, which is probably why the non-DOS PCs lasted until the early 90s.

Increasing Capabilities My first exposure to computers in school was in the 4th grade, when I would have been about 9. There was a single Apple ][ machine in that room. I primarily remember playing Oregon Trail on it. The next year, the school added a computer lab. Remember, this is a small rural area, so each graduating class might have about 25 people in it; this lab was shared by everyone in the K-8 building. It was full of some flavor of IBM PS/2 machines running DOS and Netware. There was a dedicated computer teacher too, though I think she was a regular teacher that was given somewhat minimal training on computers. We were going to learn typing that year, but I did so well on the very first typing program that we soon worked out that I could do programming instead. I started going to school early (these machines were far more powerful than the XT at home) and worked on programming projects there. Eventually my parents bought me a Gateway 486SX/25 with a VGA monitor and hard drive. Wow! This was a whole different world. It may have come with Windows 3.0 or 3.1 on it, but I mainly remember running OS/2 on that machine. More on that below.

Programming That CoCo II came with a BASIC interpreter in ROM. It came with a large manual, which served as a BASIC tutorial as well. The BASIC interpreter was also the shell, so literally you could not use the computer without at least a bit of BASIC. Once I had access to a DOS machine, it also had a BASIC interpreter: GW-BASIC. There was a fair bit of software written in BASIC at the time, but most of the more advanced software wasn't. I wondered how these .EXE and .COM programs were written. I could find vague references to DEBUG.EXE, assemblers, and such. But it wasn't until I got a copy of Turbo Pascal that I was able to do that sort of thing myself. Eventually I got Borland C++ and taught myself C as well. A few years later, I wanted to try writing GUI programs for Windows, and bought Watcom C++, which was much cheaper than the competition and could target Windows, DOS (and I think even OS/2). Notice that, aside from BASIC, none of this was free, and none of it was bundled. You couldn't just download a C compiler, or Python interpreter, or whatnot back then. You had to pay for the ability to write any kind of serious code on the computer you already owned.

The Microsoft Domination Microsoft came to dominate the PC landscape, and then even the computing landscape as a whole. IBM very quickly lost control over the hardware side of PCs as Compaq and others made clones, but Microsoft has managed, in varying degrees even to this day, to keep a stranglehold on the software, and especially the operating system, side. Yes, there was occasional talk of things like DR-DOS, but by and large the dominant platform came to be the PC, and if you had a PC, you ran DOS (and later Windows) from Microsoft. For a while, it looked like IBM was going to challenge Microsoft on the operating system front; they had OS/2, and when I switched to it sometime around the version 2.1 era in 1993, it was unquestionably more advanced technically than the consumer-grade Windows from Microsoft at the time. It had Internet support baked in, could run most DOS and Windows programs, and had introduced a replacement for the by-then terrible FAT filesystem: HPFS, in 1988. Microsoft wouldn't introduce a better filesystem for its consumer operating systems until Windows XP in 2001, 13 years later. But more on that story later.

Free Software, Shareware, and Commercial Software I've covered the high cost of software already. Obviously $500 software wasn't going to sell in the home market. So what did we have? Mainly, these things:
  1. Public domain software. It was free to use, and if implemented in BASIC, probably had source code with it too.
  2. Shareware
  3. Commercial software (some of it from small publishers was a lot cheaper than $500)
Let's talk about shareware. The idea with shareware was that a company would release a useful program, sometimes limited. You were encouraged to "register", or pay for, it if you liked it and used it. And, regardless of whether you registered it or not, you were told "please copy!" Sometimes shareware was fully functional, and registering it got you nothing more than printed manuals and an easy conscience (guilt trips for not registering weren't necessarily very subtle). Sometimes unregistered shareware would have a nag screen: a delay of a few seconds while they told you to register. Sometimes they'd be limited in some way; you'd get more features if you registered. With games, it was popular to have a trilogy, and release the first episode (inevitably ending with a cliffhanger) as shareware, while the subsequent episodes would require registration. In any event, a lot of software people used in the 80s and 90s was shareware. Also pirated commercial software, though in the earlier days of computing, I think some people didn't even know the difference. Notice what's missing: Free Software / FLOSS in the Richard Stallman sense of the word. Stallman lived in the big institution world (after all, he worked at MIT) and what he was doing with the Free Software Foundation and GNU project beginning in 1983 never really filtered into the DOS/Windows world at the time. I had no awareness of it even existing until into the 90s, when I first started getting some hints of it as a port of gcc became available for OS/2. The Internet was what really brought this home, but I'm getting ahead of myself. I want to say again: FLOSS never really entered the DOS and Windows 3.x ecosystems. You'd see it make a few inroads here and there in later versions of Windows, and more so now that Microsoft has been sort of forced to accept it, but still, reflect on its legacy. What is the software market like in Windows compared to Linux, even today? Now it is, finally, time to talk about connectivity!

Getting On-Line What does it even mean to "get on-line"? Certainly not connecting to a wifi access point. The answer is, unsurprisingly, complex. But for everyone except the large institutional users, it begins with a telephone.

The telephone system By the 80s, there was one communication network that already reached into nearly every home in America: the phone system. Virtually every household (note I don't say every person) was uniquely identified by a 10-digit phone number. You could, at least in theory, call up virtually any other phone in the country and be connected in less than a minute. But I've got to talk about cost. The way things worked in the USA, you paid a monthly fee for a phone line. Included in that monthly fee was unlimited local calling. What is a "local call"? That was an extremely complex question. Generally it meant, roughly, calling within your city. But of course, as you deal with things like suburbs and cities growing into each other (e.g., the Dallas-Ft. Worth metroplex), things got complicated fast. But let's just say for simplicity you could call others in your city. What about calling people not in your city? That was "long distance", and you paid for it by the minute, often hugely. Long distance rates were difficult to figure out, but were generally most expensive during business hours and cheapest at night or on weekends. Prices eventually started to come down when competition was introduced for long distance carriers, but even then you often were stuck with a single carrier for long distance calls outside your city but within your state. Anyhow, let's just leave it at this: local calls were virtually free, and long distance calls were extremely expensive.

Getting a modem I remember getting a modem that ran at either 1200bps or 2400bps. Either way, quite slow; you could often read even plain text faster than the modem could display it. But what was a modem? A modem hooked up to a computer with a serial cable, and to the phone system. By the time I got one, modems could automatically dial and answer. You would send a command like ATDT5551212 and it would dial 555-1212. Modems had speakers, because often things wouldn't work right, and the telephone system was oriented around speech, so you could hear what was happening. You'd hear it wait for dial tone, then dial, then hopefully the remote end would ring, a modem there would answer, you'd hear the screeching of a handshake, and eventually your terminal would say CONNECT 2400. Now your computer was bridged to the other; anything going out your serial port was encoded as sound by your modem and decoded at the other end, and vice-versa. But what, exactly, was the other end? It might have been another person at their computer. Turn on local echo, and you can see what they did. Maybe you'd send files to each other. But in my case, the answer was different: PC Magazine.
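Purely as an illustration of that ATDT exchange, here is how a program today might poke a Hayes-compatible modem over a serial port. This is a sketch assuming the third-party pyserial library and a modem on /dev/ttyS0; the device path and phone number are placeholders, not details from the post:

import serial  # pip install pyserial

# Device path, speed, and phone number are placeholders.
with serial.Serial("/dev/ttyS0", 2400, timeout=30) as port:
    port.write(b"ATZ\r")          # reset the modem
    print(port.readline())        # typically b"OK\r\n" (or the echoed command first)
    port.write(b"ATDT5551212\r")  # tone-dial 555-1212
    # On success the modem eventually reports something like CONNECT 2400;
    # after that, bytes written to the port travel over the phone line to
    # the remote serial port, and vice-versa.
    print(port.readline())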

PC Magazine and CompuServe Starting around 1986 (so I would have been about 6 years old), I got to read PC Magazine. My dad would bring copies that were being discarded at his office home for me to read, and I think eventually bought me a subscription directly. This was not just a standard magazine; it ran something like 350-400 pages an issue, and came out every other week. This thing was a monster. It had reviews of hardware and software, descriptions of upcoming technologies, pages and pages of ads (that often had some degree of being informative to them). And they had sections on programming. Many issues would talk about BASIC or Pascal programming, and there'd be a utility in most issues. What do I mean by "a utility in most issues"? Did they include a floppy disk with software? No, of course not. There was a literal program listing printed in the magazine. If you wanted the utility, you had to type it in. And a lot of them were written in assembler, so you had to have an assembler. An assembler, of course, was not free and I didn't have one. Or maybe they wrote it in Microsoft C, and I had Borland C, and (of course) they weren't compatible. Sometimes they would list the program sort of in binary: line after line of a BASIC program, with lines like 64, 193, 253, 0, 53, 0, 87 that you would type in for hours, hopefully correctly. Running the BASIC program would, if you got it correct, emit a .COM file that you could then run. They did have a rudimentary checksum system built in, but it wasn't even a CRC, so something like swapping two numbers would go unnoticed until the program mysteriously hung. Eventually they teamed up with CompuServe to offer a limited slice of CompuServe for the purpose of downloading PC Magazine utilities. This was called PC MagNet. I am foggy on the details, but I believe that for a time you could connect to the limited PC MagNet part of CompuServe for free (after the cost of the long-distance call, that is) rather than paying for CompuServe itself (because, OF COURSE, that also charged you by the minute). So in the early days, I would get special permission from my parents to place a long distance call, and after some nerve-wracking minutes in which we were aware every minute was racking up charges, I could navigate the menus, download what I wanted, and log off immediately. I still, incidentally, mourn what PC Magazine became. As with computing generally, it followed the mass market. It lost its deep technical chops, cut its programming columns, stopped talking about things like how SCSI worked, and so forth. By the time it stopped printing in 2009, it was no longer a square-bound 400-page behemoth, but rather looked more like a copy of Newsweek, but with less depth.
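For flavor, here is a hypothetical Python rendering of the type-in idea: rows of decimal byte values, each with a naive sum check, written out as a .COM file. This is not PC Magazine's actual checksum scheme, just a sketch of the concept; the example bytes happen to form a tiny DOS program that prints "A" and exits:

# Each tuple is (data bytes, naive sum of those bytes) -- illustrative only.
LISTING = [
    ([180, 2, 178, 65, 205, 33], 663),
    ([180, 76, 205, 33], 494),
]

def build(listing, out_path="typed_in.com"):
    payload = bytearray()
    for lineno, (data, check) in enumerate(listing, start=1):
        if sum(data) != check:
            raise SystemExit(f"checksum mismatch on line {lineno}: retype it")
        payload.extend(data)
    with open(out_path, "wb") as f:
        f.write(payload)
    print(f"wrote {len(payload)} bytes to {out_path}")

build(LISTING)

Note how weak the check is: swapping two values on a line keeps the sum identical, exactly the failure mode described above.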

Continuing with CompuServe CompuServe was a much larger service than just PC MagNet. Eventually, our family got a subscription. It was still an expensive and scarce resource; I'd call it only after hours when the long-distance rates were cheapest. Everyone had a numerical username made of two numbers separated by a comma; mine was 71510,1421. CompuServe had forums, and files. Eventually I would use TapCIS to queue up things I wanted to do offline, to minimize phone usage online. CompuServe eventually added a gateway to the Internet. For the sum of somewhere around $1 a message, you could send or receive an email from someone with an Internet email address! I remember the thrill of one time, as a kid of probably 11 years, sending a message to one of the editors of PC Magazine and getting a kind, if brief, reply back! But inevitably I had

The Godzilla Phone Bill Yes, one month I became lax in tracking my time online. I ran up my parents' phone bill. I don't remember how high, but I remember it was hundreds of dollars, a hefty sum at the time. As I watched Jason Scott's BBS Documentary, I realized how common an experience this was. I think this was the end of CompuServe for me for a while.

Toll-Free Numbers I lived near a town with a population of 500. Not even IN town, but near town. The calling area included another town with a population of maybe 1500, so all told, there were maybe 2000 people total I could talk to with a local call, though far fewer numbers, because, remember, telephones were allocated by the household. There were, as far as I know, zero modems that were a local call (aside from one that belonged to a friend I met in around 1992). So basically everything was long-distance. But there was a special feature of the telephone network: toll-free numbers. Normally when calling long-distance, you, the caller, paid the bill. But with a toll-free number, beginning with 1-800, the recipient paid the bill. These numbers almost inevitably belonged to corporations that wanted to make it easy for people to call. Sales and ordering lines, for instance. Some of these companies started to set up modems on toll-free numbers. There were few of these, but they existed, so of course I had to try them! One of them was a company called PennyWise that sold office supplies. They had a toll-free line you could call with a modem to order stuff. Yes, online ordering before the web! I loved office supplies. And, because I lived far from a big city, if the local K-Mart didn't have it, I probably couldn't get it. Of course, the interface was entirely text, but you could search for products and place orders with the modem. I had loads of fun exploring the system, and actually ordered things from them and probably actually saved money doing so. With the first order they shipped a monster full-color catalog. That thing must have been 500 pages, like the Sears catalogs of the day. Every item had a part number, which streamlined ordering through the modem.

Inbound FAXes By the 90s, a number of modems became able to send and receive FAXes as well. For those that don't know, a FAX machine was essentially a special modem. It would scan a page and digitally transmit it over the phone system, where it would (at least in the early days) be printed out in real time, because the machines didn't have the memory to store an entire page as an image. Eventually, PC modems integrated FAX capabilities. There still wasn't anything useful I could do locally, but there were ways I could get other companies to FAX something to me. I remember two of them. One was for US Robotics. They had an "on demand" FAX system. You'd call up a toll-free number, which was an automated IVR system. You could navigate through it and select various documents of interest to you: spec sheets and the like. You'd key in your FAX number, hang up, and US Robotics would call YOU and FAX you the documents you wanted. Yes! I was talking to a computer (of a sort) at no cost to me! The New York Times also ran a service for a while called TimesFax. Every day, they would FAX out a page or two of summaries of the day's top stories. This was pretty cool in an era in which I had no other way to access anything from the New York Times. I managed to sign up for TimesFax (I have no idea how, anymore) and for a while I would get a daily FAX of their top stories. When my family got its first laser printer, I could then even print these FAXes, complete with the gothic New York Times masthead. Wow! (OK, so technically I could print it on a dot-matrix printer also, but graphics on a 9-pin dot matrix is a kind of pain that is a whole other article.)

My own phone line Remember how I discussed that phone lines were allocated per household? This was a problem for a lot of reasons:
  1. Anybody that tried to call my family while I was using my modem would get a busy signal (unable to complete the call)
  2. If anybody in the house picked up the phone while I was using it, that would degrade the quality of the ongoing call and either mess up or disconnect the call in progress. In many cases, that could cancel a file transfer (which wasn't necessarily easy or possible to resume), prompting howls of annoyance from me.
  3. Generally we all had to work around each other
So eventually I found various small jobs and used the money I made to pay for my own phone line and my own long distance costs. Eventually I upgraded to a 28.8Kbps US Robotics Courier modem even! Yes, you heard it right: I got a job and a bank account so I could have a phone line and a faster modem. Uh, isn't that why every teenager gets a job? Now my local friend and I could call each other freely, at least on my end (I can't remember if he had his own phone line too). We could exchange files using HS/Link, which had the added benefit of allowing split-screen chat even while a file transfer was in progress. I'm sure we spent hours chatting to each other keyboard-to-keyboard while sharing files with each other.

Technology in Schools By this point in the story, we're in the late 80s and early 90s. I'm still using PC-style OSs at home; OS/2 in the later years of this period, DOS or maybe a bit of Windows in the earlier years. I mentioned that they let me work on programming at school starting in 5th grade. It was soon apparent that I knew more about computers than anybody on staff, and I started getting pulled out of class to help teachers or administrators with vexing school problems. This continued until I graduated from high school, incidentally often to my enjoyment, and the annoyance of one particular teacher who, I must say, I was fine with annoying in this way. That's not to say that there was institutional support for what I was doing. It was, after all, a small school. Larger schools might have introduced BASIC or maybe Logo in high school. But I had already taught myself BASIC, Pascal, and C by the time I was somewhere around 12 years old. So I wouldn't have had any use for that anyhow. There were programming contests occasionally held in the area. Schools would send teams. My school didn't really send anybody, but I went as an individual. One of them was run by a local college (but for jr. high or high school students). Years later, I met one of the professors that ran it. He remembered me, and that day, better than I did. The programming contest had problems one could solve in BASIC or Logo. I knew nothing about what to expect going into it, but I had lugged my computer and screen along, and asked him, "Can I write my solutions in C?" He was, apparently, stunned, but said, "Sure, go for it." I took first place that day, leading to some rather confused teams from much larger schools. The Netware network that the school had was, as these generally were, itself isolated. There was no link to the Internet or anything like it. Several schools across three local counties eventually invested in a fiber-optic network linking them together. This built a larger, but still closed, network. Its primary purpose was to allow students to be exposed to a wider variety of classes at high schools. Participating schools had an "ITV room", outfitted with cameras and mics. So students at any school could take classes offered over ITV at other schools. For instance, only my school taught German classes, so people at any of those participating schools could take German. It was an early Zoom room. But alongside the TV signal, there was enough bandwidth to run some Netware frames. By about 1995 or so, this let one of the schools purchase some CD-ROM software that was made available on a file server and could be accessed by any participating school. Nice! But Netware was mainly about file and printer sharing; there wasn't even a facility like email, at least not on our deployment.

BBSs My last hop before the Internet was the BBS. A BBS was a computer program, usually run by a hobbyist like me, on a computer with a modem connected. Callers would call it up, and they'd interact with the BBS. Most BBSs had discussion groups like forums and file areas. Some also had games. I, of course, continued to have that most vexing of problems: they were all long-distance. There were some ways to help with that, chiefly QWK and BlueWave. These, somewhat like TapCIS in the CompuServe days, let me download new message posts for reading offline, and queue up my own messages to send later. QWK and BlueWave didn't help with file downloading, though.

BBSs get networked BBSs were an interesting thing. You'd call up one, and inevitably somewhere in the file area would be a BBS list. Download the BBS list and you've suddenly got a list of phone numbers to try calling. All of them were long distance, of course. You'd try calling them at random and have a success rate of maybe 20%. The other 80% would be defunct; you might get the dreaded "this number is no longer in service" or the even more dreaded angry human answering the phone (and of course a modem can't talk to a human, so they'd just get silence for probably the nth time that week). The phone company cared nothing about BBSs and recycled their numbers just as fast as any others. To talk to various people, or participate in certain discussion groups, you'd have to call specific BBSs. That's annoying enough in the general case, but even more so for someone paying long distance for it all, because it takes a few minutes to establish a connection to a BBS: handshaking, logging in, menu navigation, etc. But BBSs started talking to each other. The earliest successful such effort was FidoNet, and for the duration of the BBS era, it remained by far the largest. FidoNet was analogous to the UUCP that the institutional users had, but ran on the much cheaper PC hardware. Basically, BBSs that participated in FidoNet would relay email, forum posts, and files between themselves overnight. Eventually, as with UUCP, by hopping through this network, messages could reach around the globe, and forums could have worldwide participation asynchronously, long before they could link to each other directly via the Internet. It was almost entirely volunteer-run.
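As a toy illustration of that store-and-forward idea (the node numbers and topology below are made up, and this is nothing like real FidoNet mailer software), a message never needs a direct call between sender and recipient; it just needs a chain of nightly calls:

# Made-up FidoNet-style addresses; each node only ever calls its neighbors.
NEIGHBORS = {
    "1:100/1": ["1:100/2"],             # my BBS calls one hub
    "1:100/2": ["1:100/1", "2:200/1"],  # the hub also calls a distant node
    "2:200/1": ["1:100/2"],
}

def route(src, dst):
    # Breadth-first search for the chain of overnight calls a message takes.
    frontier, seen = [[src]], {src}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == dst:
            return path
        for nxt in NEIGHBORS[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(" -> ".join(route("1:100/1", "2:200/1")))
# 1:100/1 -> 1:100/2 -> 2:200/1  (two hops, so potentially two nights of relaying)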

Running my own BBS At age 13, I eventually chose to set up my own BBS. It ran on my single phone line, so of course when I was dialing up something else, nobody could dial up me. Not that this was a huge problem; in my town of 500, I probably had a good 1 or 2 regular callers in the beginning. In the PC era, there was a big difference between a server and a client. Server-class software was expensive and rare. Maybe in later years you had an email client, but an email server would be completely unavailable to you as a home user. But with a BBS, I could effectively run a server. I even ran serial lines in our house so that the BBS could be connected to from other rooms! Since I was running OS/2, the BBS didn't tie up the computer; I could continue using it for other things. FidoNet had an Internet email gateway. This one, unlike CompuServe's, was free. Once I had a BBS on FidoNet, you could reach me from the Internet using the FidoNet address. This didn't support attachments, but then email of the day didn't really, either. Various others outside Kansas ran FidoNet distribution points. I believe one of them was mgmtsys; my memory is quite vague, but I think they offered a direct gateway and I would call them to pick up Internet mail via FidoNet protocols, but I'm not at all certain of this.

Pros and Cons of the Non-Microsoft World As mentioned, Microsoft was and is the dominant operating system vendor for PCs. But I left that world in 1993, and here, nearly 30 years later, have never really returned. I got an operating system with more technical capabilities than the DOS and Windows of the day, but the tradeoff was a much smaller software ecosystem. OS/2 could run DOS programs, but it ran OS/2 programs a lot better. So if I were to run a BBS, I wanted one that had a native OS/2 version, limiting me to a small fraction of available BBS server software. On the other hand, as a fully 32-bit operating system, there started to be OS/2 ports of certain software with a Unix heritage; most notably for me at the time, gcc. At some point, I eventually came across the RMS essays and started to be hooked.

Internet: The Hunt Begins I certainly was aware that the Internet was out there and interesting. But the first problem was: how the heck do I get connected to the Internet?

Computer labs There was one place that tended to have Internet access: colleges and universities. In 7th grade, I participated in a program that resulted in me being invited to visit Duke University, and in 8th grade, I participated in National History Day, resulting in a trip to visit the University of Maryland. I probably sought out computer labs at both of those. My most distinct memory was finding my way into a computer lab at one of those universities, and it was full of NeXT workstations. I had never seen or used NeXT before, and had no idea how to operate it. I had brought a box of floppy disks, unaware that the DOS disks probably weren't compatible with NeXT. Closer to home, a small college had a computer lab that I could also visit. I would go there in summer, or when it wasn't being used, with my stack of floppies. I remember downloading disk images of FLOSS operating systems: FreeBSD, Slackware, or Debian, at the time. The hash marks from the DOS-based FTP client would creep across the screen as the 1.44MB disk images would slowly download. telnet was also available on those machines, so I could telnet to things like public-access Archie servers and libraries, though not Gopher. Still, FTP and telnet access opened up a lot, and I learned quite a bit in those years.

Continuing the Journey At some point, I got a copy of the Whole Internet User's Guide and Catalog, published in 1994. I still have it. If I hadn't already figured it out by then, I certainly became aware from it that Unix was the dominant operating system on the Internet. The examples in Whole Internet covered FTP, telnet, and gopher, all assuming the user somehow got to a Unix prompt. The web was introduced about 300 pages in; clearly viewed as something that wasn't page 1 material. And it covered the command-line www client before introducing the graphical Mosaic. Even then, though, the book highlighted Mosaic's utility as a front-end for Gopher and FTP, and even the ability to launch telnet sessions by clicking on links. But having a copy of the book didn't equate to having any way to run Mosaic. The machines in the computer lab I mentioned above all ran DOS and were incapable of running a graphical browser. I had no SLIP or PPP (both ways to run Internet traffic over a modem) connectivity at home. In short, the Web was something for the large institutional users at the time.

CD-ROMs As CD-ROMs came out, with their huge (for the day) 650MB capacity, various companies started collecting software that could be downloaded on the Internet and selling it on CD-ROM. The two most popular ones were Walnut Creek CD-ROM and Infomagic. One could buy extensive Shareware and gaming collections, and then even entire Linux and BSD distributions. Although not exactly an Internet service per se, it was a way of bringing what may ordinarily only be accessible to institutional users into the home computer realm.

Free Software Jumps In As I mentioned, by the mid 90s, I had come across RMS's writings about free software, most probably his 1992 essay "Why Software Should Be Free". (Please note, this is not a commentary on the more recently-revealed issues surrounding RMS, but rather his writings and work as I encountered them in the 90s.) The notion of a Free operating system, not just in cost but in openness, was incredibly appealing. Not only could I tinker with it to a much greater extent due to having source for everything, but it included so much software that I'd otherwise have to pay for. Compilers! Interpreters! Editors! Terminal emulators! And, especially, server software of all sorts. There'd be no way I could afford or run Netware, but with a Free Unixy operating system, I could do all that. My interest was obviously piqued. Add to that the fact that I could actually participate and contribute, and I was about to become hooked on something that I've stayed hooked on for decades. But then the question was: which Free operating system? Eventually I chose FreeBSD to begin with; that would have been sometime in 1995. I don't recall the exact reasons for that. I remember downloading Slackware install floppies, and probably the fact that Debian wasn't yet at 1.0 scared me off for a time. FreeBSD's fantastic Handbook (far better than anything I could find for Linux at the time) was no doubt also a factor.

The de Raadt Factor Why not NetBSD or OpenBSD? The short answer is Theo de Raadt. Somewhere in this time, when I was somewhere between 14 and 16 years old, I asked some questions comparing NetBSD to the other two free BSDs. This was on a NetBSD mailing list, but for some reason Theo saw it and got a flame war going, which CC'd me. Now keep in mind that even if NetBSD had a web presence at the time, it would have been minimal, and I would have (not all that unusually for the time) had no way to access it. I was certainly not aware of the, shall we say, acrimony between Theo and NetBSD. While I had certainly seen an online flamewar before, this took on a different and more disturbing tone; months later, Theo randomly emailed me under the subject "SLIME" saying that I was, well, "SLIME". I seem to recall periodic emails from him thereafter reminding me that he hates me and that he had blocked me. (Disclaimer: I have poor email archives from this period, so the full details are lost to me, but I believe I am accurately conveying these events from over 25 years ago.) This was a surprise, and an unpleasant one. I was trying to learn, and while it is possible I didn't understand some aspect or other of netiquette (or Theo's personal hatred of NetBSD) at the time, still that is not a reason to flame a 16-year-old (though he would have had no way to know my age). This didn't leave any kind of scar, but did leave a lasting impression; to this day, I am particularly concerned with how FLOSS projects handle poisonous people. Debian, for instance, has come a long way in this over the years, and even Linus Torvalds has turned over a new leaf. I don't know if Theo has. In any case, I didn't use NetBSD then. I did try it periodically in the years since, but never found it compelling enough to justify a large switch from Debian. I never tried OpenBSD for various reasons, but one of them was that I didn't want to join a community that tolerates behavior such as Theo's from its leader.

Moving to FreeBSD Moving from OS/2 to FreeBSD was final. That is, I didn't have enough hard drive space to keep both. I also didn't have the backup capacity to back up OS/2 completely. My BBS, which ran Virtual BBS (and at some point also AdeptXBBS), was deleted and reincarnated in a different form. My BBS was a member of both FidoNet and VirtualNet; the latter was specific to VBBS, and had to be dropped. I believe I may have also had to drop the FidoNet link for a time. This was the biggest change of computing in my life to that point. The earlier experiences hadn't literally destroyed what came before. OS/2 could still run my DOS programs. Its command shell was quite DOS-like. It ran Windows programs. I was going to throw all that away and leap into the unknown. I wish I had saved a copy of my BBS; I would love to see the messages I exchanged back then, or see its menu screens again. I have little memory of what it looked like. But other than that, I have no regrets. Pursuing Free, Unixy operating systems brought me a lot of enjoyment and a good career. That's not to say it was easy. All the problems of not being in the Microsoft ecosystem were magnified under FreeBSD and Linux. In a day before EDID, monitor timings had to be calculated manually and you risked destroying your monitor if you got them wrong. Word processing and spreadsheet software was pretty much not there for FreeBSD or Linux at the time; I was therefore forced to learn LaTeX and actually appreciated that. Software like PageMaker or CorelDraw was certainly nowhere to be found for those free operating systems either. But I got a ton of new capabilities. I mentioned the BBS didn't shut down, and indeed it didn't. I ran what was surely a supremely unique oddity: a free, dialin Unix shell server in the middle of a small town in Kansas. I'm sure I provided things such as pine for email and some help text and maybe even printouts for how to use it. The set of callers slowly grew over the time period, in fact. And then I got UUCP.

Enter UUCP Even throughout all this, there was no local Internet provider and things were still long distance. I had Internet Email access via assorted routes, but they were all strange. And, I wanted access to Usenet. In 1995, it happened. The local ISP I mentioned offered UUCP access. Though I couldn't afford the dialup shell (or later, SLIP/PPP) that they offered due to long-distance costs, UUCP's very efficient batched processes looked doable. I believe I established that link when I was 15, so in 1995. I worked to register my domain, complete.org, as well. At the time, the process was a bit lengthy and involved downloading a text file form, filling it out in a precise way, sending it to InterNIC, and probably mailing them a check. Well, I did that, and in September of 1995, complete.org became mine. I set up sendmail on my local system, as well as INN to handle the limited Usenet newsfeed I requested from the ISP. I even ran Majordomo to host some mailing lists, including some that were surprisingly high-traffic for a few-times-a-day long-distance modem UUCP link! The modem client programs for FreeBSD were somewhat less advanced than for OS/2, but I believe I wound up using Minicom or Seyon to continue to dial out to BBSs and, I believe, continue to use Learning Link. So all the while I was setting up my local BBS, I continued to have access to the text Internet, consisting of chiefly Gopher for me.

Switching to Debian I switched to Debian sometime in 1995 or 1996, and have been using Debian as my primary OS ever since. I continued to offer shell access, but added the WorldVU Atlantis menuing BBS system. This provided a return of a more BBS-like interface (by default; shell was still an option) as well as some BBS door games such as LoRD and TradeWars 2002, running under DOS emulation. I also continued to run INN, and ran ifgate to allow FidoNet echomail to be presented into INN Usenet-like newsgroups, and netmail to be gated to Unix email. This worked pretty well. The BBS continued to grow in these days, peaking at about two dozen total user accounts, and maybe a dozen regular users.

Dial-up access availability I believe it was in 1996 that dial-up PPP access finally became available in my small town. What a thrill! FINALLY! I could now FTP, use Gopher, telnet, and the web all from home. Of course, it was at modem speeds, but still. (Strangely, I have a memory of accessing the Web using WebExplorer from OS/2. I don't know exactly why; it's possible that by this time, I had upgraded to a 486 DX2/66 and was able to reinstall OS/2 on the old 25MHz 486, or maybe something was wrong with the timeline from my memories from 25 years ago above. Or perhaps I made the occasional long-distance call somewhere before I ditched OS/2.) Gopher sites still existed at this point, and I could access them using Netscape Navigator, which likely became my standard Gopher client at that point. I don't recall using the UMN text-mode gopher client locally at that time, though it's certainly possible I did.
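Since Gopher figures so heavily in this story, here is a minimal Python sketch of what a Gopher request looks like on the wire: open TCP port 70, send a selector terminated by CRLF, and read until the server closes the connection. The hostname below is a placeholder, not a server mentioned in the post:

import socket

def gopher_fetch(host, selector="", port=70):
    # An empty selector asks for the server's top-level menu.
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

print(gopher_fetch("gopher.example.org").decode("utf-8", "replace"))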

The city Starting when I was 15, I took computer science classes at Wichita State University. The first one was a class in the summer of 1995 on C++. I remember being worried about being good enough for it (I was, after all, just after my HS freshman year and had never taken the prerequisite C class). I loved it and got an A! By 1996, I was taking more classes. In 1996 or 1997 I stayed in Wichita during the day due to having more than one class. So, what would I do then but enjoy the computer lab? The CS dept. had two of them: one that had NCD X terminals connected to a pair of SunOS servers, and another one running Windows. I spent most of the time in the Unix lab with the NCDs; I'd use Netscape or pine, write code, enjoy the University's fast Internet connection, and so forth. In 1997 I had graduated high school and that summer I moved to Wichita to attend college. As was so often the case, I shut down the BBS at that time. It would be 5 years until I again dealt with Internet at home in a rural community. By the time I moved to my apartment in Wichita, I had stopped using OS/2 entirely. I have no memory of ever having OS/2 there. Along the way, I had bought a Pentium 166, and then the most expensive piece of computing equipment I have ever owned: a DEC Alpha, which, of course, ran Linux.

ISDN I must have used dialup PPP for a time, but I eventually got a job working for the ISP I had used for UUCP, and then PPP. While there, I got a 128Kbps ISDN line installed in my apartment, and they gave me a discount on the service for it. That was around 3x the speed of a modem, and crucially was always on and gave me a public IP. No longer did I have to use UUCP; now I got to host my own things! By at least 1998, I was running a web server on www.complete.org, and I had an FTP server going as well.

Even Bigger Cities In 1999 I moved to Dallas, and there got my first broadband connection: an ADSL link at, I think, 1.5Mbps! Now that was something! But it had some reliability problems. I eventually put together a server and had it hosted at an acquaintance's place who had SDSL in his apartment. Within a couple of years, I had switched to various kinds of proper hosting for it, but that is a whole other article. In Indianapolis, I got a cable modem for the first time, with even faster speeds but prohibitions on running servers on it. Yuck.

Challenges Being non-Microsoft continued to have challenges. Until the advent of Firefox, a web browser was one of the biggest. While Netscape supported Linux on i386, it didn't support Linux on Alpha. I hobbled along with various attempts at emulators, old versions of Mosaic, and so forth. And, until StarOffice was open-sourced as Open Office, reading Microsoft file formats was also a challenge, though WordPerfect was briefly available for Linux. Over the years, I have become used to the Linux ecosystem. Perhaps I use Gimp instead of Photoshop and digikam instead of, well, whatever somebody would use on Windows. But I get ZFS, and containers, and so much that isn't available there. Yes, I know Apple never went away and is a thing, but for most of the time period I discuss in this article, at least after the rise of DOS, it was niche compared to the PC market.

Back to Kansas In 2002, I moved back to Kansas, to a rural home near a different small town in the county next to where I grew up. Over there, it was back to dialup at home, but I had faster access at work. I didn't much care for this, and thus began a 20+-year effort to get broadband in the country. At first, I got a wireless link, which worked well enough in the winter, but had serious problems in the summer when the trees leafed out. Eventually DSL became available locally; highly unreliable, but still, it was something. Then I moved back to the community I grew up in, a few miles from where I grew up. Again I got DSL, a bit better. But after some years, being at the end of the run of DSL meant I had poor speeds and reliability problems. I eventually switched to various wireless ISPs, which continues to the present day; while people in cities can get Gbps service, I can get, at best, about 50Mbps. Long-distance fees are gone, but the speed disparity remains.

Concluding Reflections I am glad I grew up where I did; the strong community has a lot of advantages I don't have room to discuss here. In a number of very real senses, having no local services made things a lot more difficult than they otherwise would have been. However, perhaps I could say that I also learned a lot through the need to come up with inventive solutions to those challenges. To this day, I think a lot about computing in remote environments: partially because I live in one, and partially because I enjoy visiting places that are remote enough that they have no Internet, phone, or cell service whatsoever. I have written articles like Tools for Communicating Offline and in Difficult Circumstances based on my own personal experience. I instinctively think about making protocols robust in the face of various kinds of connectivity failures because I experience various kinds of connectivity failures myself.

(Almost) Everything Lives On In 2002, Gopher turned 10 years old. It had probably been about 9 or 10 years since I had first used Gopher, which was the first way I got on live Internet from my house. It was hard to believe. By that point, I had an always-on Internet link at home and at work. I had my Alpha, and probably also at least PCMCIA Ethernet for a laptop (many laptops had modems by the 90s also). Despite Gopher's popularity in the early 90s, less than 10 years after it came on the scene and started to unify the Internet, it was mostly forgotten. And it was at that moment that I decided to try to resurrect it. The University of Minnesota finally released it under an Open Source license. I wrote the first new gopher server in years, pygopherd, and introduced gopher to Debian. Gopher lives on; there are now quite a few Gopher clients and servers out there, newly started post-2002. The Gemini protocol can be thought of as something akin to Gopher 2.0, and it too has a small but blossoming ecosystem. Archie, the old FTP search tool, is dead though. Same for WAIS and a number of the other pre-web search tools. But still, even FTP lives on today. And BBSs? Well, they didn't go away either. Jason Scott's fabulous BBS documentary looks back at the history of the BBS, while Back to the BBS from last year talks about the modern BBS scene. FidoNet somehow is still alive and kicking. UUCP still has its place and has inspired a whole string of successors. Some, like NNCP, are clearly direct descendants of UUCP. Filespooler lives in that ecosystem, and you can even see UUCP concepts in projects as far afield as Syncthing and Meshtastic. Usenet still exists, and you can now run Usenet over NNCP just as I ran Usenet over UUCP back in the day (which you can still do as well). Telnet, of course, has been largely supplanted by ssh, but the concept is more popular now than ever, as Linux has made ssh available on everything from Raspberry Pi to Android. And I still run a Gopher server, looking pretty much like it did in 2002. This post also has a permanent home on my website, where it may be periodically updated.

14 August 2022

Russ Allbery: Review: Still Not Safe

Review: Still Not Safe, by Robert L. Wears & Kathleen M. Sutcliffe
Publisher: Oxford University Press
Copyright: November 2019
ISBN: 0-19-027128-0
Format: Kindle
Pages: 232
Still Not Safe is an examination of the recent politics and history of patient safety in medicine. Its conclusions are summarized by the opening paragraph of the preface:
The American moral and social philosopher Eric Hoffer reportedly said that every great cause begins as a movement, becomes a business, and eventually degenerates into a racket. The reform movement to make healthcare safer is clearly a great cause, but patient safety efforts are increasingly following Hoffer's path.
Robert Wears was Professor of Emergency Medicine at the University of Florida specializing in patient safety. Kathleen Sutcliffe is Professor of Medicine and Business at Johns Hopkins. This book is based on research funded by a grant from the Robert Wood Johnson Foundation, for which both Wears and Sutcliffe were primary investigators. (Wears died in 2017, but the acknowledgments imply that at least early drafts of the book existed by that point and it was indeed co-written.) The anchor of the story of patient safety in Still Not Safe is the 1999 report from the Institute of Medicine entitled To Err is Human, to which the authors attribute an explosion of public scrutiny of medical safety. The headline conclusion of that report, which led nightly news programs after its release, was that 44,000 to 120,000 people died each year in the United States due to medical error. This report prompted government legislation, funding for new safety initiatives, a flurry of follow-on reports, and significant public awareness of medical harm. What it did not produce, in the authors' view, is significant improvements in patient safety. The central topic of this book is an analysis of why patient safety efforts have had so little measurable effect. The authors attribute this to three primary causes: an unwillingness to involve safety experts from outside medicine or absorb safety lessons from other disciplines, an obsession with human error that led to profound misunderstandings of the nature of safety, and the misuse of safety concerns as a means to centralize control of medical practice in the hands of physician-administrators. (The term used by the authors is "managerial, scientific-bureaucratic medicine," which is technically accurate but rather awkward.) Biggest complaint first: This book desperately needed examples, case studies, or something to make these ideas concrete. There are essentially none in 230 pages apart from passing mentions of famous cases of medical error that added to public pressure, and a tantalizing but maddeningly nonspecific discussion of the atypically successful effort to radically improve the safety of anesthesia. Apparently anesthesiologists involved safety experts from outside medicine, avoided a focus on human error, turned safety into an engineering problem, and made concrete improvements that had a hugely positive impact on the number of adverse events for patients. Sounds fascinating! Alas, I'm just as much in the dark about what those improvements were as I was when I started reading this book. Apart from a vague mention of some unspecified improvements to anesthesia machines, there are no concrete descriptions whatsoever. I understand that the authors were probably leery of giving too many specific examples of successful safety initiatives since one of their core points is that safety is a mindset and philosophy rather than a replicable set of actions, and copying the actions of another field without understanding their underlying motivations or context within a larger system is doomed to failure. But you have to give the reader something, or the book starts feeling like a flurry of abstract assertions. Much is made here of the drawbacks of a focus on human error, and the superiority of the safety analysis done in other fields that have moved beyond error-centric analysis (and in some cases have largely discarded the word "error" as inherently unhelpful and ambiguous). 
That leads naturally to showing an analysis of an adverse incident through an error lens and then through a more nuanced safety lens, making the differences concrete for the reader. It was maddening to me that the authors never did this. This book was recommended to me as part of a discussion about safety and reliability in tech and the need to learn from safety practices in other fields. In that context, I didn't find it useful, although surprisingly that's because the thinking in medicine (at least as presented by these authors) seems behind the current thinking in distributed systems. The idea that human error is not a useful model for approaching reliability is standard in large tech companies, nearly all of which use blameless postmortems for exactly that reason. Tech, similar to medicine, does have a tendency to be insular and not look outside the field for good ideas, but the approach to large-scale reliability in tech seems to have avoided the other traps discussed here. (Security is another matter, but security is also adversarial, which creates different problems that I suspect require different tools.) What I did find fascinating in this book, although not directly applicable to my own work, is the way in which a focus on human error becomes a justification for bureaucratic control and therefore a concentration of power in a managerial layer. If the assumption is that medical harm is primarily caused by humans making avoidable mistakes, and therefore the solution is to prevent humans from making mistakes through better training, discipline, or process, this creates organizations that are divided into those who make the rules and those who follow the rules. The long-term result is a practice of medicine in which a small number of experts decide the correct treatment for a given problem, and then all other practitioners are expected to precisely follow that treatment plan to avoid "errors." (The best distributed systems approaches may avoid this problem, but this failure mode seems nearly universal in technical support organizations.) I was startled by how accurate that portrayal of medicine felt. My assumption prior to reading this book was that the modern experience of medicine as an assembly line with patients as widgets was caused by the pressure for higher "productivity" and thus shorter visit times, combined with (in the US) the distorting effects of our broken medical insurance system. After reading this book, I've added a misguided way of thinking about medical error and risk avoidance to that analysis. One of the authors' points (which, as usual, I wish they'd made more concrete with a case study) is that the same thought process that lets a doctor make a correct diagnosis and find a working treatment is the thought process that may lead to an incorrect diagnosis or treatment. There is not a separable state of "mental error" that can be eliminated. Decision-making processes are more complicated and more integrated than that. If you try to prevent "errors" by eliminating flexibility, you also eliminate vital tools for successfully treating patients. The authors are careful to point out that the prior state of medicine in which each doctor was a force to themselves and there was no role for patient safety as a discipline was also bad for safety. Reverting to the state of medicine before the advent of the scientific-bureaucratic error-avoiding culture is also not a solution. 
But, rather at odds with other popular books about medicine, the authors are highly critical of safety changes focused on human error prevention, such as mandatory checklists. In their view, this is exactly the sort of attempt to blindly copy the machinery of safety in another field (in this case, air travel) without understanding the underlying purpose and system of which it's a part. I am not qualified to judge the sharp dispute over whether there is solid clinical evidence that checklists are helpful (these authors claim there is not; I know other books make different claims, and I suspect it may depend heavily on how the checklist is used). But I found the authors' argument that one has to design systems holistically for safety, not try to patch in safety later by turning certain tasks into rote processes and humans into machines, to be persuasive. I'm not willing to recommend this book given how devoid it is of concrete examples. I was able to fill in some of that because of prior experience with the literature on site reliability engineering, but a reader who wasn't previously familiar with discussions of safety or reliability may find much of this book too abstract to be comprehensible. But I'm not sorry I read it. I hadn't previously thought about the power dynamics of a focus on error, and I think that will be a valuable observation to keep in mind. Rating: 6 out of 10

19 July 2022

Craig Small: Linux Memory Statistics

Pretty much everyone who has spent some time on a command line in Linux would have looked at the free command. This command provides some overall statistics on the memory and how it is used. Typical output looks something like this:
             total        used        free      shared  buff/cache  available
Mem:      32717924     3101156    26950016      143608     2666752  29011928
Swap:      1000444           0     1000444
Memory sits in the first row after the headers, then we have the swap statistics. Most of the numbers are fetched directly from the procfs file /proc/meminfo and then scaled and presented to the user. A good example of a simple stat is total, which is just the MemTotal row located in that file. For the rest of this post, I'll make the rows from /proc/meminfo have an amber background. What is Free, and what is Used? While you could say that the free value is also merely the MemFree row, this is where Linux memory statistics start to get odd. While that value is indeed what is found for MemFree and not a calculated field, it can be misleading. Most people would assume that Free means free to use, with the implication that only this amount of memory is free to use and nothing more. That would also mean the used value is really used by something and nothing else can use it. In the early days of free and Linux statistics in general, that was how it looked. Used is a calculated field (there is no MemUsed row) and was, initially, Total - Free. The problem was that Used also included the Buffers and Cached values. This meant that it looked like Linux was using a lot of memory for something. If you read old messages from before 2002 complaining about excessive memory use, they are quite likely looking at the values printed by free. The thing was, under memory pressure the kernel could release Buffers and Cached for use. Not all of that storage, but some of it, so it wasn't all really used. To counter this, free showed a row between Memory and Swap with Used having Buffers and Cached removed and Free having the same values added (a small sketch of this adjustment follows the example output):
             total       used       free     shared    buffers     cached
Mem:      32717924    6063648   26654276          0     313552    2234436
-/+ buffers/cache:    3515660   29202264
Swap:      1000444          0    1000444
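To make that adjustment concrete, here is a rough sketch (my illustration, not the actual procps code) that recomputes the old -/+ buffers/cache row directly from /proc/meminfo; the field names are the standard ones from that file:
# Hedged sketch: recompute the old "-/+ buffers/cache" row from /proc/meminfo.
# Illustrative arithmetic only; the real procps implementation differs in detail.
awk '/^MemTotal:/ {total=$2}
     /^MemFree:/  {free=$2}
     /^Buffers:/  {buffers=$2}
     /^Cached:/   {cached=$2}
     END {
         used = total - free                      # old-style Used, caches included
         printf "-/+ buffers/cache: %10d %10d\n",
                used - buffers - cached,          # Used with reclaimable memory removed
                free + buffers + cached           # Free with reclaimable memory added
     }' /proc/meminfo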
You might notice that this older version of free from around 2001 shows buffers and cached separately and there's no available column (we'll get to Available later). Shared appears as zero because the old row was labelled MemShared and not Shmem, which was changed in Linux 2.6, and I'm running a system way past that version. It's not ideal: you can say that the amount of free memory is something above 26654276 and below 29202264 KiB, but nothing more accurate. buffers and cached are almost never all-used or all-unused, so the real figure is not either of those numbers but something in-between. Cached, just not for Caches That appeared to be an uneasy truce within the Linux memory statistics world for a while. By 2014 we realised that there was a problem with Cached. This field used to hold the memory used as a cache for files read from storage. While this value still has that component, it was also being used for tmpfs storage, and the use of tmpfs went from an interesting idea to being everywhere. Cheaper memory meant larger tmpfs partitions went from a luxury to something everyone was doing. The problem is that with large files put into a tmpfs partition, Free would decrease, but so would Cached, meaning the free column in the -/+ row would not change much and would understate the impact of files in tmpfs. Luckily, in Linux 2.6.32 the developers added a Shmem row, which was the amount of memory used for shmem and tmpfs. Subtracting that value from Cached gave you the real cached value, which we call main_cache, and very briefly this is what the cached value would show in free. However, this caused further problems because not all Shmem can be reclaimed and reused, so it probably swapped one set of problematic values for another. It did, however, prompt the Linux kernel community to have a look at the problem. Enter Available There was increasing awareness within the kernel community of the issues with working out how much memory a system has free. It wasn't just the output of free or the percentage values in top; load balancers or workload-placing systems would have their own view of this value. As memory management and use within the Linux kernel evolved, what was or wasn't free changed, and all the userland programs were expected somehow to keep up. The kernel developers realised the best place to get an estimate of the memory not used was in the kernel, and they created a new memory statistic called Available. That way, if how memory is used or reserved as unreclaimable changes, the kernel can adjust the estimate and the userland programs will go along with it. procps has a fallback for this value and it's a pretty complicated setup (a rough sketch in shell follows the list below).
  1. Find the vm.min_free_kbytes sysctl (/proc/sys/vm/min_free_kbytes), which is the minimum amount of free memory the kernel will keep in reserve
  2. Add 25% to this value (e.g. if it was 4000, make it 5000); this is the low watermark
  3. To find available, start with MemFree and subtract the low watermark
  4. Take the sum of the Inactive(file) and Active(file) values; if half of that sum is greater than the low watermark, add the half, otherwise add the sum minus the low watermark
  5. If half of SReclaimable (the reclaimable part of Slab) is greater than the low watermark, add that half value, otherwise add SReclaimable minus the low watermark
  6. If what you get is less than zero, make available zero
  7. Or, just look at the MemAvailable row in /proc/meminfo
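Purely as an illustration of those steps (the real procps code lives in its library and differs in detail), the fallback can be approximated from the shell; the only inputs are the standard /proc/meminfo fields and the vm.min_free_kbytes sysctl:
# Rough sketch of the MemAvailable fallback described in the list above.
# Not the procps source; just the arithmetic, for systems exposing these fields.
low=$(( $(cat /proc/sys/vm/min_free_kbytes) * 5 / 4 ))        # steps 1-2: low watermark
awk -v low="$low" '
    /^MemFree:/          {memfree=$2}
    /^Active\(file\):/   {active_file=$2}
    /^Inactive\(file\):/ {inactive_file=$2}
    /^SReclaimable:/     {sreclaimable=$2}
    END {
        avail = memfree - low                                  # step 3
        pagecache = active_file + inactive_file
        avail += (pagecache / 2 > low) ? pagecache / 2 : pagecache - low           # step 4
        avail += (sreclaimable / 2 > low) ? sreclaimable / 2 : sreclaimable - low  # step 5
        if (avail < 0) avail = 0                               # step 6
        printf "estimated available: %d kB\n", avail
    }' /proc/meminfo
grep MemAvailable /proc/meminfo                                # step 7: the kernel's own figure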
For the free program, we added the Available value and the -/+ line was removed. The main_cache value was Cached + Slab, while Used was calculated as Total - Free - main_cache - Buffers. This was very close to what the Used column in the -/+ line used to show. What's on the Slab? The next issue that came up was the use of slabs. At this point, main_cache was Cached + Slab, but Slab consists of reclaimable and unreclaimable components. One part of Slab can be used elsewhere if needed and the other cannot, but the procps tools treated them the same. The Used calculation should not subtract SUnreclaim from the Total because it is actually being used. So in 2015 main_cache was changed to be Cached + SReclaimable. This meant that Used memory was calculated as Total - Free - Cached - SReclaimable - Buffers. Revenge of tmpfs and the return of Available The tmpfs impact on Cached was still an issue. If you added a 10MB file into a tmpfs partition, then Free would reduce by 10MB and Cached would increase by 10MB, meaning Used stayed unchanged even though 10MB had gone somewhere. It was time to retire the complex calculation of Used. For procps 4.0.1 onwards, Used now means "not available". We take the Total memory and subtract the Available memory. This is not a perfect setup, but it is probably the best one we have, and testing is giving us much more sensible results. It's also easier for people to understand (take the total value you see in free, then subtract the available value). What does that mean for main_cache, which is part of the buff/cache value you see? As this value is no longer in the used memory calculation, it is less important. Should it also be reverted to simply Cached without the reclaimable Slabs? The calculated fields In summary, what this means for the calculated fields in procps, at least, is: used is Total - Available, main_cache is Cached + SReclaimable, and buff/cache is Buffers + main_cache. Almost everything else, with the exception of some bounds checking, is what you get out of /proc/meminfo, which is straight from the kernel.
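For the newer definition the arithmetic fits in a one-liner; again, this is just a sketch of the calculation described above, not the procps source:
# procps 4.0.1 onwards: Used is simply "not available".
awk '/^MemTotal:/ {total=$2} /^MemAvailable:/ {avail=$2}
     END {printf "used = %d kB (MemTotal - MemAvailable)\n", total - avail}' /proc/meminfo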

6 July 2022

Russ Allbery: Review: A Master of Djinn

Review: A Master of Djinn, by P. Djèlí Clark
Series: Dead Djinn Universe #1
Publisher: Tordotcom
Copyright: 2021
ISBN: 1-250-26767-6
Format: Kindle
Pages: 391
A Master of Djinn is the first novel in the Dead Djinn Universe, but (as you might guess from the series title) is a direct sequel to the novelette "A Dead Djinn in Cairo". The novelette is not as good as the novel, but I recommend reading it first for the character introductions and some plot elements that carry over. Reading The Haunting of Tram Car 015 first is entirely optional. In 1912 in a mansion in Giza, a secret society of (mostly) British men is meeting. The Hermetic Brotherhood of Al-Jahiz is devoted to unlocking the mysteries of the Soudanese mystic al-Jahiz. In our world, these men would likely be colonialist plunderers. In this world, they still aspire to that role, but they're playing catch-up. Al-Jahiz bored into the Kaf, releasing djinn and magic into the world and making Egypt a world power in its own right. Now, its cities are full of clockwork marvels, djinn walk the streets as citizens, and British rule has been ejected from India and Africa by local magic. This group of still-rich romantics and crackpots hopes to discover the knowledge lost when al-Jahiz disappeared. They have not had much success. This will not save their lives. Fatma el-Sha'arawi is a special investigator for the Ministry of Alchemy, Enchantments, and Supernatural Entities. Her job is sorting out the problems caused by this new magic, such as a couple of young thieves with a bottle full of sleeping djinn whose angry reaction to being unexpectedly woken has very little to do with wishes. She is one of the few female investigators in a ministry that is slowly modernizing with the rest of society (Egyptian women just got the vote). She's also the one called to investigate the murder of a secret society of British men and a couple of Cairenes by a black-robed man in a golden mask. The black-robed man claims to be al-Jahiz returned, and proves to be terrifyingly adept at manipulating crowds and sparking popular dissent. Fatma and the Ministry's first attempt to handle him is a poorly-judged confrontation stymied by hostile crowds, the man's duplicating bodyguard, and his own fighting ability. From there, it's a race between Fatma's pursuit of linear clues and the black-robed man's efforts to destabilize society. This, like the previous short stories, is a police procedural, but it has considerably more room to breathe at novel length. That serves it well, since as with "A Dead Djinn in Cairo" the procedural part is a linear, reactive vehicle for plot exposition. I was more invested in Fatma's relationships with the supporting characters. Since the previous story, she's struck up a romance with Siti, a highly competent follower of the old Egyptian gods (Hathor in particular) and my favorite character in the book. She's also been assigned a new partner, Hadia, a new graduate and another female agent. The slow defeat of Fatma's irritation at not being allowed to work alone by Hadia's cheerful competence and persistence (and willingness to do paperwork) adds a lot to the characterization. The setting felt a bit less atmospheric than The Haunting of Tram Car 015, but we get more details of international politics, and they're a delight. Clark takes obvious (and warranted) glee in showing how the reintroduction of magic has shifted the balance of power away from the colonial empires. Cairo is a bustling steampunk metropolis and capital of a world power, welcoming envoys from West African kingdoms alongside the (still racist and obnoxious but now much less powerful) British and other Europeans. 
European countries were forced to search their own mythology for possible sources of magic power, which leads to the hilarious scene of the German Kaiser carrying a sleepy goblin on his shoulder to monitor his diplomacy. The magic of the story was less successful for me, although still enjoyable. The angels from "A Dead Djinn in Cairo" make another appearance and again felt like the freshest bit of world-building, but we don't find out much more about them. I liked the djinn and their widely-varied types and magic, but apart from them and a few glimpses of Egypt's older gods, that was the extent of the underlying structure. There is a significant magical artifact, but the characters are essentially handed an instruction manual, use it according to its instructions, and it then does what it was documented to do. It was a bit unsatisfying. I'm the type of fantasy reader who always wants to read the sourcebook for the magic system, but this is not that sort of a book. Instead, it's the kind of book where the investigator steadily follows a linear trail of clues and leads until they reach the final confrontation. Here, the confrontation felt remarkably like cut scenes from a Japanese RPG: sudden vast changes in scale, clockwork constructs, massive monsters, villains standing on mobile platforms, and surprise combat reversals. I could almost hear the fight music and see the dialog boxes pop up. This isn't exactly a complaint I love Japanese RPGs but it did add to the feeling that the plot was on rails and didn't require many decisions from the protagonist. Clark also relies on an overused plot cliche in the climactic battle, which was a minor disappointment. A Master of Djinn won the Nebula for best 2021 novel, I suspect largely on the basis of its setting and refreshingly non-European magical system. I don't entirely agree; the writing is still a bit clunky, with unnecessary sentences and stock phrases showing up here and there, and I think it suffers from the typical deficiencies of SFF writers writing mysteries or police procedurals without the plot sophistication normally found in that genre. But this is good stuff for a first novel, with fun supporting characters (loved the librarian) and some great world-building. I would happily read more in this universe. Rating: 7 out of 10

3 June 2022

François Marier: Using Gandi DNS for Let's Encrypt certbot verification

I had some problems getting the Gandi certbot plugin to work in Debian bullseye since the documentation appears to be outdated. When running certbot renew --dry-run, I saw the following error message:
Plugin legacy name certbot-plugin-gandi:dns may be removed in a future version. Please use dns instead.
Thanks to an issue in another DNS plugin, I was able to easily update my configuration to the new naming convention.

Setup Get an API key from Gandi and then put it in /etc/letsencrypt/gandi.ini:
# live dns v5 api key
dns_api_key=ABCDEF
then make it readable only by root:
chown root:root /etc/letsencrypt/gandi.ini
chmod 600 /etc/letsencrypt/gandi.ini
Then install the required package:
apt install python3-certbot-dns-gandi

Getting an initial certificate To get an initial certificate using the Gandi plugin, simply use the following command:
certbot certonly -a dns --dns-credentials /etc/letsencrypt/gandi.ini -d example.fmarier.org
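If the certificate needs to cover more than one name, certbot accepts repeated -d options with the same plugin and credentials file; the extra host name below is only a placeholder for illustration:
certbot certonly -a dns --dns-credentials /etc/letsencrypt/gandi.ini \
  -d example.fmarier.org -d www.example.fmarier.org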

Setting up automatic renewal If you have automatic renewals enabled, you'll want to ensure your /etc/letsencrypt/renewal/example.fmarier.org.conf file looks like this:
# renew_before_expiry = 30 days
version = 1.12.0
archive_dir = /etc/letsencrypt/archive/example.fmarier.org
cert = /etc/letsencrypt/live/example.fmarier.org/cert.pem
privkey = /etc/letsencrypt/live/example.fmarier.org/privkey.pem
chain = /etc/letsencrypt/live/example.fmarier.org/chain.pem
fullchain = /etc/letsencrypt/live/example.fmarier.org/fullchain.pem
[renewalparams]
account = abcdef
authenticator = dns
server = https://acme-v02.api.letsencrypt.org/directory
dns_credentials = /etc/letsencrypt/gandi.ini
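With that renewal configuration in place, the same dry-run command that surfaced the original warning is a cheap way to confirm that the plugin, credentials and renewal parameters still work before the certificate is actually due:
certbot renew --dry-run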

18 May 2022

Reproducible Builds: Supporter spotlight: Jan Nieuwenhuizen on Bootstrappable Builds, GNU Mes and GNU Guix

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as a recent ones about ARDC and the Google Open Source Security Team (GOSST). Today, however, we will be talking with Jan Nieuwenhuizen about Bootstrappable Builds, GNU Mes and GNU Guix.
Chris Lamb: Hi Jan, thanks for taking the time to talk with us today. First, could you briefly tell me about yourself? Jan: Thanks for the chat; it s been a while! Well, I ve always been trying to find something new and interesting that is just asking to be created but is mostly being overlooked. That s how I came to work on GNU Guix and create GNU Mes to address the bootstrapping problem that we have in free software. It s also why I have been working on releasing Dezyne, a programming language and set of tools to specify and formally verify concurrent software systems as free software. Briefly summarised, compilers are often written in the language they are compiling. This creates a chicken-and-egg problem which leads users and distributors to rely on opaque, pre-built binaries of those compilers that they use to build newer versions of the compiler. To gain trust in our computing platforms, we need to be able to tell how each part was produced from source, and opaque binaries are a threat to user security and user freedom since they are not auditable. The goal of bootstrappability (and the bootstrappable.org project in particular) is to minimise the amount of these bootstrap binaries. Anyway, after studying Physics at Eindhoven University of Technology (TU/e), I worked for digicash.com, a startup trying to create a digital and anonymous payment system sadly, however, a traditional account-based system won. Separate to this, as there was no software (either free or proprietary) to automatically create beautiful music notation, together with Han-Wen Nienhuys, I created GNU LilyPond. Ten years ago, I took the initiative to co-found a democratic school in Eindhoven based on the principles of sociocracy. And last Christmas I finally went vegan, after being mostly vegetarian for about 20 years!
Chris: For those who have not heard of it before, what is GNU Guix? What are the key differences between Guix and other Linux distributions? Jan: GNU Guix is both a package manager and a full-fledged GNU/Linux distribution. In both forms, it provides state-of-the-art package management features such as transactional upgrades and package roll-backs, hermetical-sealed build environments, unprivileged package management as well as per-user profiles. One obvious difference is that Guix forgoes the usual Filesystem Hierarchy Standard (ie. /usr, /lib, etc.), but there are other significant differences too, such as Guix being scriptable using Guile/Scheme, as well as Guix s dedication and focus on free software.
Chris: How does GNU Guix relate to GNU Mes? Or, rather, what problem is Mes attempting to solve? Jan: GNU Mes was created to address the security concerns that arise from bootstrapping an operating system such as Guix. Even if this process entirely involves free software (i.e. the source code is, at least, available), this commonly uses large and unauditable binary blobs. Mes is a Scheme interpreter written in a simple subset of C and a C compiler written in Scheme, and it comes with a small, bootstrappable C library. Twice, the Mes bootstrap has halved the size of opaque binaries that were needed to bootstrap GNU Guix. These reductions were achieved by first replacing GNU Binutils, GNU GCC and the GNU C Library with Mes, and then replacing Unix utilities such as awk, bash, coreutils, grep sed, etc., by Gash and Gash-Utils. The final goal of Mes is to help create a full-source bootstrap for any interested UNIX-like operating system.
Chris: What is the current status of Mes? Jan: Mes supports all that is needed from R5RS and GNU Guile to run MesCC with Nyacc, the C parser written for Guile, for 32-bit x86 and ARM. The next step for Mes would be to become more compatible with Guile, e.g., to have guile-module support and to support running Gash and Gash-Utils. In working to create a full-source bootstrap, I have disregarded the kernel and Guix build system for now, but otherwise, all packages should be built from source, and obviously, no binary blobs should go in. We still need a Guile binary to execute some scripts, and it will take at least another one to two years to remove that binary. I'm using the 80/20 approach, cutting corners initially to get something working and useful early. Another metric would be how many architectures we have. We are quite a way along with ARM: tinycc now works, but there are still problems with GCC and Glibc. RISC-V is coming, too, which could be another metric. Someone has looked into picking up NixOS this summer. How many distros do anything about reproducibility or bootstrappability? The bootstrappability community is so small that we don't need metrics, sadly. The number of bytes of binary seed is a nice metric, but running the whole thing on a full-fledged Linux system is tough to put into a metric. Also, it is worth noting that I'm developing on a modern Intel machine (i.e. a platform with a management engine); that's another key component that doesn't have metrics.
Chris: From your perspective as a Mes/Guix user and developer, what does reproducibility mean to you? Are there any related projects? Jan: From my perspective, I m more into the problem of bootstrapping, and reproducibility is a prerequisite for bootstrappability. Reproducibility clearly takes a lot of effort to achieve, however. It s relatively easy to install some Linux distribution and be happy, but if you look at communities that really care about security, they are investing in reproducibility and other ways of improving the security of their supply chain. Projects I believe are complementary to Guix and Mes include NixOS, Debian and on the hardware side the RISC-V platform shares many of our core principles and goals.
Chris: Well, what are these principles and goals? Jan: Reproducibility and bootstrappability often feel like the next step in the frontier of free software. If you have all the sources and you can t reproduce a binary, that just doesn t feel right anymore. We should start to desire (and demand) transparent, elegant and auditable software stacks. To a certain extent, that s always been a low-level intent since the beginning of free software, but something clearly got lost along the way. On the other hand, if you look at the NPM or Rust ecosystems, we see a world where people directly install binaries. As they are not as supportive of copyleft as the rest of the free software community, you can see that movement and people in our area are doing more as a response to that so that what we have continues to remain free, and to prevent us from falling asleep and waking up in a couple of years and see, for example, Rust in the Linux kernel and (more importantly) we require big binary blobs to use our systems. It s an excellent time to advance right now, so we should get a foothold in and ensure we don t lose any more.
Chris: What would be your ultimate reproducibility goal? And what would the key steps or milestones be to reach that? Jan: The ultimate goal would be to have a system built with open hardware, with all software on it fully bootstrapped from its source. This bootstrap path should be easy to replicate and straightforward to inspect and audit. All fully reproducible, of course! In essence, we would have solved the supply chain security problem. Our biggest challenge is ignorance. There is much unawareness about the importance of what we are doing. As it is rather technical and doesn t really affect everyday computer use, that is not surprising. This unawareness can be a great force driving us in the opposite direction. Think of Rust being allowed in the Linux kernel, or Python being required to build a recent GNU C library (glibc). Also, the fact that companies like Google/Apple still want to play us vs them , not willing to to support GPL software. Not ready yet to truly support user freedom. Take the infamous log4j bug everyone is using open source these days, but nobody wants to take responsibility and help develop or nurture the community. Not ecosystem , as that s how it s being approached right now: live and let live/die: see what happens without taking any responsibility. We are growing and we are strong and we can do a lot but if we have to work against those powers, it can become problematic. So, let s spread our great message and get more people involved!
Chris: What has been your biggest win? Jan: From a technical point of view, the full-source bootstrap has been our biggest win. A talk by Carl Dong at the 2019 Breaking Bitcoin conference stated that connecting Jeremiah Orians' Stage0 project to Mes would be the holy grail of bootstrapping, and we recently managed to achieve just that: in other words, starting from hex0, a 357-byte binary, we can now build the entire Guix system. This past year we have not made significant visible progress, however, as our funding was unfortunately not there. The Stage0 project has advanced in RISC-V. A month ago, though, I secured NLnet funding for another year, and thanks to NLnet, Ekaitz Zarraga and Timothy Sample will work on GNU Mes and the Guix bootstrap as well. Separate to this, the bootstrappable community has grown a lot from the two people it was six years ago: there are currently over 100 people in the #bootstrappable IRC channel, for example. The enlarged community is possibly an even more important win going forward.
Chris: How realistic is a 100% bootstrappable toolchain? And from someone who has been working in this area for a while, is solving the Trusting Trust problem actually feasible in reality? Jan: Two answers: yes and no, it really depends on your definition. One great thing is that the whole Stage0 project can also run on the Knight virtual machine, a hardware platform that was designed, I think, in the 1970s. I believe we can and must do better than we are doing today, and that there's a lot of value in it. The core issue is not the trust; we can probably all trust each other. On the other hand, we don't want to trust each other or even ourselves. I am not, personally, going to inspect my RISC-V laptop, and other people create the hardware and probably do not want to inspect the software. The answer comes back to being conscientious and doing what is right. Inserting GCC as a binary blob is not right. I think we can do better, and that's what I'd like to do. The security angle is interesting, but I don't like the paranoid part of that; I like the beauty of what we are creating together and stepwise improving on that.
Chris: Thanks for taking the time to talk to us today. If someone wanted to get in touch or learn more about GNU Guix or Mes, where might someone go? Jan: Sure! First, check out: I m also on Twitter (@janneke_gnu) and on octodon.social (@janneke@octodon.social).
Chris: Thanks for taking the time to talk to us today. Jan: No problem. :)


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

9 May 2022

Robert McQueen: Evolving a strategy for 2022 and beyond

As a board, we have been working on several initiatives to make the Foundation a better asset for the GNOME Project. We re working on a number of threads in parallel, so I wanted to explain the big picture a bit more to try and connect together things like the new ED search and the bylaw changes. We re all here to see free and open source software succeed and thrive, so that people can be be truly empowered with agency over their technology, rather than being passive consumers. We want to bring GNOME to as many people as possible so that they have computing devices that they can inspect, trust, share and learn from. In previous years we ve tried to boost the relevance of GNOME (or technologies such as GTK) or solicit donations from businesses and individuals with existing engagement in FOSS ideology and technology. The problem with this approach is that we re mostly addressing people and organisations who are already supporting or contributing FOSS in some way. To truly scale our impact, we need to look to the outside world, build better awareness of GNOME outside of our current user base, and find opportunities to secure funding to invest back into the GNOME project. The Foundation supports the GNOME project with infrastructure, arranging conferences, sponsoring hackfests and travel, design work, legal support, managing sponsorships, advisory board, being the fiscal sponsor of GNOME, GTK, Flathub and we will keep doing all of these things. What we re talking about here are additional ways for the Foundation to support the GNOME project we want to go beyond these activities, and invest into GNOME to grow its adoption amongst people who need it. This has a cost, and that means in parallel with these initiatives, we need to find partners to fund this work. Neil has previously talked about themes such as education, advocacy, privacy, but we ve not previously translated these into clear specific initiatives that we would establish in addition to the Foundation s existing work. This is all a work in progress and we welcome any feedback from the community about refining these ideas, but here are the current strategic initiatives the board is working on. We ve been thinking about growing our community by encouraging and retaining diverse contributors, and addressing evolving computing needs which aren t currently well served on the desktop. Initiative 1. Welcoming newcomers. The community is already spending a lot of time welcoming newcomers and teaching them the best practices. Those activities are as time consuming as they are important, but currently a handful of individuals are running initiatives such as GSoC, Outreachy and outreach to Universities. These activities help bring diverse individuals and perspectives into the community, and helps them develop skills and experience of collaborating to create Open Source projects. We want to make those efforts more sustainable by finding sponsors for these activities. With funding, we can hire people to dedicate their time to operating these programs, including paid mentors and creating materials to support newcomers in future, such as developer documentation, examples and tutorials. This is the initiative that needs to be refined the most before we can turn it into something real. Initiative 2: Diverse and sustainable Linux app ecosystem. 
I spoke at the Linux App Summit about the work that GNOME and Endless has been supporting in Flathub, but this is an example of something which has a great overlap between commercial, technical and mission-based advantages. The key goal here is to improve the financial sustainability of participating in our community, which in turn has an impact on the diversity of who we can expect to afford to enter and remain in our community. We believe the existence of this is critically important for individual developers and contributors to unlock earning potential from our ecosystem, through donations or app sales. In turn, a healthy app ecosystem also improves the usefulness of the Linux desktop as a whole for potential users. We believe that we can build a case for commercial vendors in the space to join an advisory board alongside with GNOME, KDE, etc to input into the governance and contribute to the costs of growing Flathub. Initiative 3: Local-first applications for the GNOME desktop. This is what Thib has been starting to discuss on Discourse, in this thread. There are many different threats to free access to computing and information in today s world. The GNOME desktop and apps need to give users convenient and reliable access to technology which works similarly to the tools they already use everyday, but keeps them and their data safe from surveillance, censorship, filtering or just being completely cut off from the Internet. We believe that we can seek both philanthropic and grant funding for this work. It will make GNOME a more appealing and comprehensive offering for the many people who want to protect their privacy. The idea is that these initiatives all sit on the boundary between the GNOME community and the outside world. If the Foundation can grow and deliver these kinds of projects, we are reaching to new people, new contributors and new funding. These contributions and investments back into GNOME represent a true win-win for the newcomers and our existing community. (Originally posted to GNOME Discourse, please feel free to join the discussion there.)

26 April 2022

Reproducible Builds: Supporter spotlight: Google Open Source Security Team (GOSST)

The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do. This is the fourth instalment in a series featuring the projects, companies and individuals who support the Reproducible Builds project. If you are a supporter of the Reproducible Builds project (of whatever size) and would like to be featured here, please get in touch with us at contact@reproducible-builds.org. We started this series by featuring the Civil Infrastructure Platform project and followed this up with a post about the Ford Foundation as well as a recent one about ARDC. However, today, we'll be talking with Meder Kydyraliev of the Google Open Source Security Team (GOSST).
Chris Lamb: Hi Meder, thanks for taking the time to talk to us today. So, for someone who has not heard of the Google Open Source Security Team (GOSST) before, could you tell us what your team is about? Meder: Of course. The Google Open Source Security Team (or GOSST ) was created in 2020 to work with the open source community at large, with the goal of making the open source software that everyone relies on more secure.
Chris: What kinds of activities is the GOSST involved in? Meder: The range of initiatives that the team is involved in recognizes the diversity of the ecosystem and unique challenges that projects face on their security journey. For example, our sponsorship of sos.dev ensures that developers are rewarded for security improvements to open source projects, whilst the long term work on improving Linux kernel security tackles specific kernel-related vulnerability classes. Many of the projects GOSST is involved with aim to make it easier for developers to improve security through automated assessment (Scorecards and Allstar) and vulnerability discovery tools (OSS-Fuzz, ClusterFuzzLite, FuzzIntrospector), in addition to contributing to infrastructure to make adopting certain best practices easier. Two great examples of best practice efforts are Sigstore for artifact signing and OSV for automated vulnerability management.
Chris: The usage of open source software has exploded in the last decade, but supply-chain hygiene and best practices has seemingly not kept up. How does GOSST see this issue and what approaches is it taking to ensure that past and future systems are resilient? Meder: Every open source ecosystem is a complex environment and that awareness informs our approaches in this space. There are, of course, no silver bullets , and long-lasting and material supply-chain improvements requires infrastructure and tools that will make lives of open source developers easier, all whilst improving the state of the wider software supply chain. As part of a broader effort, we created the Supply-chain Levels for Software Artifacts framework that has been used internally at Google to protect production workloads. This framework describes the best practices for source code and binary artifact integrity, and we are engaging with the community on its refinement and adoption. Here, package managers (such as PyPI, Maven Central, Debian, etc.) are an essential link in the software supply chain due to their near-universal adoption; users do not download and compile their own software anymore. GOSST is starting to work with package managers to explore ways to collaborate together on improving the state of the supply chain and helping package maintainers and application developers do better all with the understanding that many open source projects are developed in spare time as a hobby! Solutions like this, which are the result of collaboration between GOSST and GitHub, are very encouraging as they demonstrate a way to materially strengthen software supply chain security with readily available tools, while also improving development workflows. For GOSST, the problem of supply chain security also covers vulnerability management and solutions to make it easier for everyone to discover known vulnerabilities in open source packages in a scalable and automated way. This has been difficult in the past due to lack of consistently high-quality data in an easily-consumable format. To address this, we re working on infrastructure (OSV.dev) to make vulnerability data more easily accessible as well as a widely adopted and automation friendly data format.
Chris: How does the Reproducible Builds effort help GOSST achieve its goals? Meder: Build reproducibility has a lot of attributes that are attractive as part of generally good build hygiene . As an example, hermeticity, one of requirements to meet SLSA level 4, makes it much easier to reason about the dependencies of a piece of software. This is an enormous benefit during vulnerability or supply chain incident response. On a higher level, however, we always think about reproducibility from the viewpoint of a user and the threats that reproducibility protects them from. Here, a lot of progress has been made, of course, but a lot of work remains to make reproducibility part of everyone s software consumption practices.
Chris: So if someone wanted to know more about GOSST or follow the team s work, where might they go to look? Meder: We post regular updates on Google s Security Blog and on the Linux hardening mailing list. We also welcome community participation in the projects we work on! See any of the projects linked above or OpenSSF s GitHub projects page for a list of efforts you can contribute to directly if you want to get involved in strengthening the open source ecosystem.
Chris: Thanks for taking the time to talk to us today. Meder: No problem. :)


For more information about the Reproducible Builds project, please see our website at reproducible-builds.org. If you are interested in ensuring the ongoing security of the software that underpins our civilisation and wish to sponsor the Reproducible Builds project, please reach out to the project by emailing contact@reproducible-builds.org.

1 April 2022

Russ Allbery: Review: Princess Floralinda and the Forty-Flight Tower

Review: Princess Floralinda and the Forty-Flight Tower, by Tamsyn Muir
Publisher: Subterranean Press
Copyright: 2020
ISBN: 1-59606-992-9
Format: Kindle
Pages: 111
A witch put Princess Floralinda at the top of a forty-flight tower, but it wasn't personal. This is just what witches do, particularly with princesses with butter-coloured curls and sapphire-blue eyes. Princes would come from miles around to battle up the floors of the tower and rescue the princess. The witch even helpfully provided a golden sword, in case a prince didn't care that much about princesses. Floralinda was provided with water and milk, two loaves of bread, and an orange, all of them magically renewing, to sustain her while she waited. In retrospect, the dragon with diamond-encrusted scales on the first floor may have been a mistake. None of the princely endeavors ever saw the second floor. The diary that Floralinda found in her room indicated that she may not be the first princess to have failed to be rescued from this tower. Floralinda finally reaches the rather astonishing conclusion that she might have to venture down the tower herself, despite the goblins she was warned were on the 39th floor (not to mention all the other monsters). The result of that short adventure, after some fast thinking, a great deal of luck, and an unforeseen assist from her magical food, is a surprising number of dead goblins. Also seriously infected hand wounds, because it wouldn't be a Tamsyn Muir story without wasting illness and body horror. That probably would have been the end of Floralinda, except a storm blew a bottom-of-the-garden fairy in through the window, sufficiently injured that she and Floralinda were stuck with each other, at least temporarily. Cobweb, the fairy, is neither kind nor inclined to help Floralinda (particularly given that Floralinda is not a child whose mother is currently in hospital), but it is an amateur chemist and finds both Floralinda's tears and magical food intriguing. Cobweb's magic is also based on wishes, and after a few failed attempts, Floralinda manages to make a wish that takes hold. Whether she'll regret the results is another question. This is a fairly short novella by the same author as Gideon the Ninth, but it's in a different universe and quite different in tone. This summary doesn't capture the writing style, which is a hard-to-describe mix of fairy tale, children's story, and slightly archaic and long-winded sentence construction. This is probably easier to show with a quote:
"You are displaying a very small-minded attitude," said the fairy, who seemed genuinely grieved by this. "Consider the orange-peel, which by itself has many very nice properties. Now, if you had a more educated brain (I cannot consider myself educated; I have only attempted to better my situation) you would have immediately said, 'Why, if I had some liquor, or even very hot water, I could extract some oil from this orange-peel, which as everyone knows is antibacterial; that may well do my hands some good,' and you wouldn't be in such a stupid predicament."
On balance, I think this style worked. It occasionally annoyed me, but it has some charm. About halfway through, I was finding the story lightly entertaining, although I would have preferred a bit less grime, illness, and physical injury. Unfortunately, the rest of the story didn't work for me. The dynamic between Floralinda and Cobweb turns into a sort of D&D progression through monster fights, and while there are some creative twists to those fights, they become all of a sameness. And while I won't spoil the ending, it didn't work for me. I think I see what Muir was trying to do, and I have some intellectual appreciation for the idea, but it wasn't emotionally satisfying. I think my root problem with this story is that Muir sets up a rather interesting world, one in which witches artistically imprison princesses, and particularly bright princesses (with the help of amateur chemist fairies) can use the trappings of a magical tower in ways the witch never intended. I liked that; it has a lot of potential. But I didn't feel like that potential went anywhere satisfying. There is some relationship and characterization work, and it reached some resolution, but it didn't go as far as I wanted. And, most significantly, I found the end point the characters reached in relation to the world to be deeply unsatisfying and vaguely irritating. I wanted to like this more than I did. I think there's a story idea in here that I would have enjoyed more. Unfortunately, it's not the one that Muir wrote, and since so much of my problem is with the ending, I can't provide much guidance on whether someone else would like this story better (and why). But if the idea of taking apart a fairy-tale tower and repurposing the pieces sounds appealing, and if you get along better with Muir's illness motif than I do, you may enjoy this more than I did. Rating: 5 out of 10

29 March 2022

Raphaël Hertzog: Join Freexian to help improve Debian

Freexian has set itself new ambitious goals in support of Debian and we would like to expand our team to help us reach those goals. We have drafted a mission statement to clarify our purpose and our values, and we hope to be able to attract talented software developers, entrepreneurs and Debian experts from our community.
Freexian's mission is to help Debian evolve to be the leading Linux distribution and a model to follow in the free software world. We want to achieve that by enabling passionate contributors to spend most of their time working on Debian and free software, combining lucrative work in support of enterprise customers, core contributions to Debian's processes, and personal goals. (Extract from Freexian's mission statement)
If Debian is an important part of your life, if improving Debian is something that you enjoy doing, if you have enough skills and Debian expertise to match some of the roles that we have described on our Join us page, then we would love to hear from you at gerants@freexian.com, so we can figure out a way to work together.

17 March 2022

Gunnar Wolf: Speaking about the OpenPGP WoT on LibrePlanet this Saturday

So, LibrePlanet, the FSF's conference, is coming! I much enjoyed attending this conference in person in March 2018. This year I submitted a talk again, and it got accepted. Of course, given the conference is still 100% online, I doubt I will be able to go 100% conference-mode (I hope to catch a couple of other talks, but, well, we are all eager to go back to how things were before 2020!)

Anyway, what is my talk about? My talk is titled "Current challenges for the OpenPGP keyserver network. Is there a way forward?". The abstract I submitted follows:
Many free projects use OpenPGP encryption or signatures for various important tasks, like defining membership, authenticating participation, asserting identity over a vote, etc. The Web-of-Trust upon which its operation is based is a model many of us hold dear, allowing for a decentralized way to assign trust to the identity of a given person. But both the Web-of-Trust model and the software that serves as a basis for the above mentioned uses are at risk due to attacks on the key distribution protocol (not on the software itself!) With this talk, I will try to bring awareness to this situation, to some possible mitigations, and present some proposals to allow for the decentralized model to continue to thrive towards the future.
I am on the third semester of my PhD, trying to somehow keep a decentralized infrastructure for the OpenPGP Web of Trust viable and usable for the future. While this is still in the early stages of my PhD work (and I still don t have a solution to present), I will talk about what the main problems are and will sketch out the path I intend to develop. What is the relevance? Mainly, I think, that many free software projects use the OpenPGP Web of Trust for their identity definitions Are we anachronistic? Are we using tools unfit for this century? I don t think so. I think we are in time to fix the main sore spots for this great example of a decentralized infrastructure.

When is my talk scheduled? This Saturday, 2022.03.19, at:
GMT / UTC time: 19:25 to 20:10
Conference schedule time (EDT/GMT-4): 15:25 to 16:10
Mexico City time (GMT-6): 13:25 to 14:10

How to watch it? The streams are open online. I will be talking in the Saturn room, feel free to just appear there and watch! The FSF asks people to [register for the conference](https://my.fsf.org/civicrm/event/info?reset=1&id=99) beforehand, in order to be able to have an active participation (i.e. ask questions and that). Of course, you might be interested in other talks take a look at the schedule! LibrePlanet keeps a video archive of their past conferences, and this talk will be linked from there. Of course, I will link to the recording once it is online. Update: As of 2022.03.30, LibrePlanet has posted the videos for all of their talks, all linked from the program. And of course, for convenience, I copied the talk over here: Current challenges for the OpenPGP keyserver network. Is there a way forward?

16 February 2022

Robert McQueen: Forward the Foundation

Earlier this week, Neil McGovern announced that he is due to be stepping down as the Executive Director as the GNOME Foundation later this year. As the President of the board and Neil s effective manager together with the Executive Committee, I wanted to take a moment to reflect on his achievements in the past 5 years and explain a little about what the next steps would be. Since joining in 2017, Neil has overseen a productive period of growth and maturity for the Foundation, increasing our influence both within the GNOME project and the wider Free and Open Source Software community. Here s a few highlights of what he s achieved together with the Foundation team and the community: Recognizing and appreciating the amazing progress that GNOME has made with Neil s support, the search for a new Executive Director provides the opportunity for the Foundation board to set the agenda and next high-level goals we d like to achieve together with our new Executive Director. In terms of the desktop, applications, technology, design and development processes, whilst there are always improvements to be made, the board s general feeling is that thanks to the work of our amazing community of contributors, GNOME is doing very well in terms of what we produce and publish. Recent desktop releases have looked great, highly polished and well-received, and the application ecosystem is growing and improving through new developers and applications bringing great energy at the moment. From here, our largest opportunity in terms of growing the community and our user base is being able to articulate the benefits of what we ve produced to a wider public audience, and deliver impact which allows us to secure and grow new and sustainable sources of funding. For individuals, we are able to offer an exceedingly high quality desktop experience and a broad range of powerful applications which are affordable to all, backed by a nonprofit which can be trusted to look after your data, digital security and your best interests as an individual. From the perspective of being a public charity in the US, we also have the opportunity to establish programs that draw upon our community, technology and products to deliver impact such as developing employable skills, incubating new Open Source contributors, learning to program and more. For our next Executive Director, we will be looking for an individual with existing experience in that nonprofit landscape, ideally with prior experience establishing and raising funds for programs that deliver impact through technology, and appreciation for the values that bring people to Free, Open Source and other Open Culture organizations. Working closely with the existing members, contributors, volunteers and whole GNOME community, and managing our relationships with the Advisory Board and other key partners, we hope to find a candidate that can build public awareness and help people learn about, use and benefit from what GNOME has built over the past two decades. Neil has agreed to stay in his position for a 6 month transition period, during which he will support the board in our search for a new Executive Director and support a smooth hand-over. Over the coming weeks we will publish the job description for the new ED, and establish a search committee who will be responsible for sourcing and interviewing candidates to make a recommendation to the board for Neil s successor a hard act to follow! 
I m confident the community will join me and the board in personally thanking Neil for his 5 years of dedicated service in support of GNOME and the Foundation. Should you have any queries regarding the process, or offers of assistance in the coming hiring process, please don t hesitate to join the discussion or reach out directly to the board.

Next.

Previous.